Visual tracking via sparsity pattern learning

Yuxi Wang, Yue Liu, Zhuwen Li, Loong Fah Cheong, Haibin Ling

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Recently, sparse representation has been applied to visual tracking by modeling the target appearance as a sparse approximation over a template set. This approach, however, is limited by the high computational cost of the ℓ1-norm minimization involved, which in turn limits the number of particle samples that can be used. This paper introduces a basic constraint on the self-representation of the target set. The sparsity pattern in this self-representation allows the 'sparse coefficients' of candidate samples to be recovered by small-scale ℓ2-norm minimizations, yielding a fast tracking algorithm. It also leads to a principled dictionary update mechanism, which is crucial for good performance. Experiments on a recently released benchmark of 50 challenging video sequences show that the proposed algorithm achieves significant runtime efficiency and tracking accuracy.
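The abstract gives only a high-level description of the method. The sketch below is a minimal illustration of the general idea, assuming a simple correlation-based stand-in for the learned sparsity pattern and a plain least-squares fit for the "small-scale ℓ2-norm minimization" step; the function names (`learn_sparsity_pattern`, `code_candidate`) and the routing of each candidate through its nearest template are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np

def learn_sparsity_pattern(T, k=5):
    """Crude stand-in for the learned self-representation pattern:
    for each template, keep the indices of the k most correlated
    other templates as its allowed support."""
    G = np.abs(T.T @ T)           # template-to-template similarity (Gram matrix)
    np.fill_diagonal(G, -np.inf)  # exclude self-matches
    return np.argsort(-G, axis=1)[:, :k]

def code_candidate(y, T, support):
    """Recover coefficients for candidate y over the restricted
    sub-dictionary with a small least-squares (l2) problem instead
    of a full l1 minimization over all templates."""
    Ts = T[:, support]                          # d x k sub-dictionary
    c, *_ = np.linalg.lstsq(Ts, y, rcond=None)  # k-dimensional coefficients
    residual = np.linalg.norm(y - Ts @ c)
    return c, residual

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 20 templates of a vectorized 32x32 patch, plus a noisy candidate
    # that resembles a mix of two templates (synthetic toy data).
    T = rng.standard_normal((1024, 20))
    y = 0.7 * T[:, 3] + 0.3 * T[:, 8] + 0.05 * rng.standard_normal(1024)

    pattern = learn_sparsity_pattern(T, k=5)
    # Assumption: route the candidate through its most similar template
    # and reuse that template's support.
    nearest = int(np.argmax(np.abs(T.T @ y)))
    support = np.append(pattern[nearest], nearest)

    c, r = code_candidate(y, T, support)
    print("support:", support)
    print("coefficients:", np.round(c, 3), "residual:", round(float(r), 3))
```

The point of the sketch is the cost profile: each candidate is coded by a k-variable least-squares solve rather than an ℓ1 program over the full dictionary, which is what makes a larger particle-sample budget affordable.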

Original language: English
Title of host publication: 2016 23rd International Conference on Pattern Recognition, ICPR 2016
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 2716-2721
Number of pages: 6
ISBN (Electronic): 9781509048472
DOI: https://doi.org/10.1109/ICPR.2016.7900046
Publication status: Published - 1 Jan 2016
Event: 23rd International Conference on Pattern Recognition, ICPR 2016 - Cancun, Mexico
Duration: 4 Dec 2016 → 8 Dec 2016

Publication series

Name: Proceedings - International Conference on Pattern Recognition
Volume: 0
ISSN (Print): 1051-4651

Conference

Conference: 23rd International Conference on Pattern Recognition, ICPR 2016
Country/Territory: Mexico
City: Cancun
Period: 4/12/16 → 8/12/16

Cite this

Wang, Y., Liu, Y., Li, Z., Cheong, L. F., & Ling, H. (2016). Visual tracking via sparsity pattern learning. In 2016 23rd International Conference on Pattern Recognition, ICPR 2016 (pp. 2716-2721). Article 7900046 (Proceedings - International Conference on Pattern Recognition; Vol. 0). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICPR.2016.7900046