Self-tuning motion model for visual tracking

Hangkai Tan*, Qingjie Zhao, Xiongpeng Wang

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

In visual tracking, selecting a suitable motion model is an important problem, since real-world movements are irregular in most cases. In this paper we propose a self-tuning motion model for target tracking, where the current motion model is computed according to the relative distance between the target positions in the last two frames. Our method achieves excellent performance on sequences where the target moves unstably or abruptly, or even when partial occlusion exists, and it is particularly robust to an unsuitable initial motion model.
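The abstract's core idea can be illustrated with a minimal sketch: scale the spread of a Gaussian random-walk motion model by the displacement between the target positions in the last two frames. The function name, the linear update rule, and the `base_sigma`/`gain` parameters below are illustrative assumptions, not the paper's exact formulation.

```python
import math

def self_tuning_variance(prev_pos, curr_pos, base_sigma=2.0, gain=0.5):
    """Hypothetical self-tuning rule: widen the motion-model spread
    in proportion to the displacement between the target positions
    in the last two frames (the paper's exact rule is not given here)."""
    dx = curr_pos[0] - prev_pos[0]
    dy = curr_pos[1] - prev_pos[1]
    dist = math.hypot(dx, dy)
    # Larger recent motion -> wider search region in the next frame;
    # a nearly static target keeps the spread close to base_sigma.
    return base_sigma + gain * dist

# A slow target keeps a tight proposal; a fast one widens it.
sigma_slow = self_tuning_variance((100, 100), (101, 100))   # small displacement
sigma_fast = self_tuning_variance((100, 100), (120, 115))   # abrupt movement
```

Because the spread is re-estimated every frame from observed motion, a poorly chosen initial value is corrected after a few frames, which is consistent with the robustness to an unsuitable initial motion model claimed in the abstract.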

Original language: English
Title of host publication: Cognitive Systems and Signal Processing - 3rd International Conference, ICCSIP 2016, Revised Selected Papers
Editors: Fuchun Sun, Huaping Liu, Dewen Hu
Publisher: Springer Verlag
Pages: 74-81
Number of pages: 8
ISBN (Print): 9789811052293
DOI: 10.1007/978-981-10-5230-9_8
Publication status: Published - 2017
Event: 3rd International Conference on Cognitive Systems and Information Processing, ICCSIP 2016 - Beijing, China
Duration: 19 Nov 2016 – 23 Nov 2016

Publication series

Name: Communications in Computer and Information Science
Volume: 710
ISSN (Print): 1865-0929

Conference

Conference: 3rd International Conference on Cognitive Systems and Information Processing, ICCSIP 2016
Country/Territory: China
City: Beijing
Period: 19/11/16 – 23/11/16


Cite this

Tan, H., Zhao, Q., & Wang, X. (2017). Self-tuning motion model for visual tracking. In F. Sun, H. Liu, & D. Hu (Eds.), Cognitive Systems and Signal Processing - 3rd International Conference, ICCSIP 2016, Revised Selected Papers (pp. 74-81). (Communications in Computer and Information Science; Vol. 710). Springer Verlag. https://doi.org/10.1007/978-981-10-5230-9_8