Adaptive multiple appearances model framework for long-term robust tracking

Shuo Tang, Longfei Zhang*, Jiapeng Chi, Zhufan Wang, Gangyi Ding

*Corresponding author of this work

Research output: Contribution to journal › Conference article › Peer-reviewed

1 citation (Scopus)

Abstract

Tracking an object over the long term remains a great challenge in computer vision. Appearance modeling is one of the keys to building a good tracker, and much research attention focuses on building an appearance model with special features and learning methods, especially online learning. However, a single model is not enough to describe all historical appearances of the tracking target during a long-term tracking task, because of viewpoint changes, illumination variation, camera switching, etc. We propose the Adaptive Multiple Appearance Model (AMAM) framework, which maintains not one model but a set of appearance models to solve this problem. Different appearance representations of the tracking target are grouped in an unsupervised manner and modeled automatically by a Dirichlet Process Mixture Model (DPMM). The tracking result is then selected, via voting and a confidence map, from the candidate targets predicted by the trackers built on those appearance models. Experimental results on multiple public datasets demonstrate better performance compared with state-of-the-art methods.
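The abstract's core mechanism is clustering the target's historical appearance representations with a Dirichlet Process Mixture Model, so the number of appearance models is inferred from the data rather than fixed in advance. As a minimal sketch of that idea (not the paper's implementation), the truncated Dirichlet-process mixture in scikit-learn's `BayesianGaussianMixture` can group hypothetical appearance feature vectors; the feature dimensions and cluster parameters below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)

# Hypothetical appearance features: three well-separated groups standing in
# for distinct historical appearances of the target (e.g. different
# viewpoints or illumination conditions). 8-D features are an assumption.
features = np.vstack([
    rng.normal(0.0, 0.3, size=(50, 8)),
    rng.normal(3.0, 0.3, size=(50, 8)),
    rng.normal(-3.0, 0.3, size=(50, 8)),
])

# Truncated Dirichlet-process mixture: up to 10 candidate components are
# allowed, but the DP prior activates only as many as the data supports,
# so the appearance-model set grows adaptively.
dpmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(features)

labels = dpmm.predict(features)          # appearance-model index per sample
n_models = len(set(labels))              # number of models actually in use
print(n_models)
```

In a tracker along these lines, each discovered cluster would back one appearance model, and new target observations would be assigned to (or spawn) models the same way.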

Original language: English
Pages (from-to): 160-170
Number of pages: 11
Journal: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 9314
DOI
Publication status: Published - 2015
Event: 16th Pacific-Rim Conference on Multimedia, PCM 2015 - Gwangju, South Korea
Duration: 16 Sep 2015 - 18 Sep 2015
