Visual tracking via sparsity pattern learning

Yuxi Wang, Yue Liu, Zhuwen Li, Loong Fah Cheong, Haibin Ling

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Recently, sparse representation has been applied to visual tracking by modeling the target appearance as a sparse approximation over a template set. However, this approach is limited by the high computational cost of the ℓ1-norm minimization involved, which also restricts the number of particle samples that can be evaluated. This paper introduces a basic constraint on the self-representation of the target set. The sparsity pattern of the self-representation allows the 'sparse coefficients' of the candidate samples to be recovered by small-scale ℓ2-norm minimizations, yielding a fast tracking algorithm. It also leads to a principled dictionary update mechanism, which is crucial for good performance. Experiments on a recently released benchmark of 50 challenging video sequences demonstrate the runtime efficiency and tracking accuracy of the proposed algorithm.
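
The abstract does not spell out the algorithm, but the core idea it describes, replacing a full ℓ1-norm solve with a small ℓ2-norm solve once the sparsity pattern (support) is known, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function name l2_recover_on_support, the ridge parameter lam, and the toy dictionary and support are all assumptions introduced here for demonstration.

import numpy as np

def l2_recover_on_support(D, y, support, lam=0.01):
    """Recover coefficients of y over dictionary D restricted to a known
    support, via a small ridge-regularized least-squares solve.

    Illustrative sketch only: given the sparsity pattern, the ℓ1 problem
    reduces to a small-scale ℓ2 problem over the active atoms.
    D: (d, n) template dictionary; y: (d,) candidate sample;
    support: indices of the active templates (hypothetical pattern).
    """
    D_s = D[:, support]                       # keep only the active atoms
    # Regularized normal equations: (D_s^T D_s + lam I) x_s = D_s^T y
    A = D_s.T @ D_s + lam * np.eye(len(support))
    x_s = np.linalg.solve(A, D_s.T @ y)
    x = np.zeros(D.shape[1])
    x[support] = x_s                          # embed back into full vector
    return x

# Example: score a candidate by its reconstruction error on the support
rng = np.random.default_rng(0)
D = rng.standard_normal((32, 20))             # 20 templates of dimension 32
support = [2, 7, 11]                          # hypothetical learned pattern
y = D[:, support] @ np.array([0.5, 0.3, 0.2]) + 0.01 * rng.standard_normal(32)
x = l2_recover_on_support(D, y, support)
print(f"reconstruction error: {np.linalg.norm(y - D @ x):.4f}")

Because each solve involves only a handful of active templates rather than the full dictionary, many candidate samples can be scored cheaply per frame, which is the runtime advantage the abstract claims over per-candidate ℓ1 minimization.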

Original language: English
Title of host publication: 2016 23rd International Conference on Pattern Recognition, ICPR 2016
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 2716-2721
Number of pages: 6
ISBN (Electronic): 9781509048472
DOIs
Publication status: Published - 1 Jan 2016
Event: 23rd International Conference on Pattern Recognition, ICPR 2016 - Cancun, Mexico
Duration: 4 Dec 2016 - 8 Dec 2016

Publication series

Name: Proceedings - International Conference on Pattern Recognition
Volume: 0
ISSN (Print): 1051-4651

Conference

Conference: 23rd International Conference on Pattern Recognition, ICPR 2016
Country/Territory: Mexico
City: Cancun
Period: 4/12/16 - 8/12/16
