Abstract
The performance of online visual trackers has improved significantly, but designing an effective appearance-adaptive model remains challenging because errors accumulate as the model is updated with newly obtained results, causing tracker drift. In this study, the authors propose a novel online tracking algorithm that integrates spatiotemporal cues to alleviate the drift problem. Their goal is a more robust way of updating an adaptive appearance model. The model consists of multiple modules called temporal cues; these modules are updated alternately, preserving both historical and current information about the tracked object so that drastic appearance changes can be handled. Each module is represented by several fragments called spatial cues. To incorporate all the spatial and temporal cues, the authors develop an efficient cue-quality evaluation criterion that combines appearance and motion information. The tracking results are then obtained by a two-stage dynamic integration mechanism. Both qualitative and quantitative evaluations on challenging video sequences demonstrate that the proposed algorithm performs favourably against state-of-the-art methods.
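The abstract does not give the paper's formulas, so the following is only a rough, hypothetical sketch of the general idea: score each spatial cue (fragment) by blending appearance and motion evidence, then fuse the per-module estimates weighted by their total quality. All function names, the linear blend, and the weighting scheme are assumptions for illustration, not the authors' actual criterion or integration mechanism.

```python
# Hypothetical sketch only: the paper's cue-quality criterion and two-stage
# integration are not specified in the abstract; this is a generic
# quality-weighted fusion for illustration.

def cue_quality(appearance_score, motion_score, alpha=0.5):
    """Blend appearance and motion evidence into one quality score
    (assumed linear blend; the paper's criterion may differ)."""
    return alpha * appearance_score + (1.0 - alpha) * motion_score

def integrate_cues(modules):
    """Two-stage integration sketch.

    Stage 1: score each spatial cue (fragment) inside a temporal module
             and form a quality-weighted module estimate.
    Stage 2: fuse module estimates, weighted by total module quality.

    Each module is a list of (appearance_score, motion_score, estimate).
    """
    fused, total_w = 0.0, 0.0
    for fragments in modules:
        # Stage 1: per-fragment quality weights and estimates.
        weights = [cue_quality(a, m) for (a, m, _) in fragments]
        estimates = [e for (_, _, e) in fragments]
        w = sum(weights)
        if w == 0:
            continue  # unreliable module: skip it entirely
        module_estimate = sum(wi * ei for wi, ei in zip(weights, estimates)) / w
        # Stage 2: accumulate module estimates weighted by module quality.
        fused += w * module_estimate
        total_w += w
    return fused / total_w if total_w else None
```

Under this sketch, a module whose cues all score zero (e.g. a stale temporal cue) contributes nothing, so the fused estimate follows the reliable modules.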
| Original language | English |
|---|---|
| Pages (from-to) | 124-137 |
| Number of pages | 14 |
| Journal | IET Computer Vision |
| Volume | 9 |
| Issue | 1 |
| DOI | |
| Publication status | Published - 1 Feb 2015 |