Online visual tracking by integrating spatio-temporal cues

Yang He, Mingtao Pei, Min Yang, Yuwei Wu, Yunde Jia

Research output: Contribution to journal › Article › peer-review

7 Citations (Scopus)

Abstract

The performance of online visual trackers has improved significantly, but designing an effective appearance-adaptive model remains challenging: errors accumulate as the model is updated with newly obtained results, causing tracker drift. In this study, the authors propose a novel online tracking algorithm that integrates spatio-temporal cues to alleviate the drift problem. The goal is a more robust way of updating an adaptive appearance model. The model consists of multiple modules called temporal cues, which are updated alternately so that both the historical and the current information of the tracked object is retained, allowing the tracker to handle drastic appearance changes. Each module is represented by several fragments called spatial cues. To incorporate all the spatial and temporal cues, the authors develop an efficient cue quality evaluation criterion that combines appearance and motion information. The tracking results are then obtained by a two-stage dynamic integration mechanism. Both qualitative and quantitative evaluations on challenging video sequences demonstrate that the proposed algorithm performs favourably against state-of-the-art methods.
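The alternate-update idea in the abstract, keeping several appearance modules so that overwriting one does not erase older appearance information, can be illustrated with a minimal sketch. This is not the authors' implementation: the class name, the round-robin update rule, and the distance-based scoring below are all illustrative assumptions, and the score uses only an appearance term, whereas the paper's cue quality criterion also combines motion information.

```python
import numpy as np

class SpatioTemporalModel:
    """Hedged sketch of a multi-module appearance model (names hypothetical).

    Each module stands in for one 'temporal cue'. Modules are overwritten
    alternately (round-robin here), so older modules keep historical
    appearance while one module tracks the newest observation.
    """

    def __init__(self, num_modules=3, feat_dim=8):
        self.modules = [np.zeros(feat_dim) for _ in range(num_modules)]
        self.next_idx = 0  # which module the next update overwrites

    def update(self, new_feature):
        # Alternate update: replace exactly one module per frame and leave
        # the rest untouched, so historical appearance survives.
        self.modules[self.next_idx] = np.asarray(new_feature, dtype=float)
        self.next_idx = (self.next_idx + 1) % len(self.modules)

    def score(self, candidate):
        # Simplified cue quality: best (negative-distance) match over all
        # modules; the paper's criterion also uses motion information.
        candidate = np.asarray(candidate, dtype=float)
        return max(-np.linalg.norm(candidate - m) for m in self.modules)
```

With two modules, three successive updates overwrite module 0 twice but leave the second-newest observation intact in module 1, which is the property that lets drastic appearance changes be absorbed without discarding history.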

Original language: English
Pages (from-to): 124-137
Number of pages: 14
Journal: IET Computer Vision
Volume: 9
Issue number: 1
DOI
Publication status: Published - 1 Feb 2015

