TY - GEN
T1 - Jointly learning a multi-class discriminative dictionary for robust visual tracking
AU - Liu, Zhao
AU - Pei, Mingtao
AU - Zhang, Chi
AU - Zhu, Mingda
N1 - Publisher Copyright:
© Springer International Publishing AG 2016.
PY - 2016
Y1 - 2016
N2 - Discriminative dictionary learning (DDL) provides an appealing paradigm for appearance modeling in visual tracking due to its superior discrimination power. However, most existing DDL-based trackers cannot handle drastic appearance changes, especially in scenarios with background clutter and/or interference from similar objects. One reason is that they often lose the subtle visual information that is critical for distinguishing the object from distractors. In this paper, we propose a robust tracker that jointly learns a multi-class discriminative dictionary. Our DDL method concurrently exploits intra-class visual information and inter-class visual correlations to learn a shared dictionary and class-specific dictionaries. By imposing several discrimination constraints on the objective function, the learnt dictionary is reconstructive, compressive and discriminative, and can thus better discriminate the object from the background. Tracking is carried out within a Bayesian inference framework in which a joint decision measure is used to construct the observation model. Evaluations on the benchmark dataset demonstrate that the proposed algorithm achieves substantially better overall performance than state-of-the-art trackers.
AB - Discriminative dictionary learning (DDL) provides an appealing paradigm for appearance modeling in visual tracking due to its superior discrimination power. However, most existing DDL-based trackers cannot handle drastic appearance changes, especially in scenarios with background clutter and/or interference from similar objects. One reason is that they often lose the subtle visual information that is critical for distinguishing the object from distractors. In this paper, we propose a robust tracker that jointly learns a multi-class discriminative dictionary. Our DDL method concurrently exploits intra-class visual information and inter-class visual correlations to learn a shared dictionary and class-specific dictionaries. By imposing several discrimination constraints on the objective function, the learnt dictionary is reconstructive, compressive and discriminative, and can thus better discriminate the object from the background. Tracking is carried out within a Bayesian inference framework in which a joint decision measure is used to construct the observation model. Evaluations on the benchmark dataset demonstrate that the proposed algorithm achieves substantially better overall performance than state-of-the-art trackers.
UR - http://www.scopus.com/inward/record.url?scp=85006934361&partnerID=8YFLogxK
U2 - 10.1007/978-3-319-48896-7_54
DO - 10.1007/978-3-319-48896-7_54
M3 - Conference contribution
AN - SCOPUS:85006934361
SN - 9783319488950
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 550
EP - 560
BT - Advances in Multimedia Information Processing – 17th Pacific-Rim Conference on Multimedia, PCM 2016, Proceedings
A2 - Chen, Enqing
A2 - Tie, Yun
A2 - Gong, Yihong
PB - Springer Verlag
T2 - 17th Pacific-Rim Conference on Multimedia, PCM 2016
Y2 - 15 September 2016 through 16 September 2016
ER -