Temporal dynamic appearance modeling for online multi-person tracking

Min Yang*, Yunde Jia

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

53 Citations (Scopus)

Abstract

Robust online multi-person tracking requires correctly associating online detection responses with existing trajectories. We address this problem by developing a novel appearance modeling approach that provides accurate appearance affinities to guide data association. In contrast to most existing algorithms, which consider only the spatial structure of human appearances, we exploit the temporal dynamic characteristics of appearance sequences to discriminate between different persons. These temporal dynamics complement the spatial structure of varying appearances in the feature space, which significantly improves the affinity measurement between trajectories and detections. We propose a feature selection algorithm that describes appearance variations with mid-level semantic features, and demonstrate its usefulness for temporal dynamic appearance modeling. Moreover, the appearance model is learned incrementally by alternately evaluating newly observed appearances and adjusting the model parameters, making it suitable for online tracking. Reliable tracking of multiple persons in complex scenes is achieved by incorporating the learned model into an online tracking-by-detection framework. Our experiments on the challenging MOTChallenge 2015 benchmark [L. Leal-Taixé, A. Milan, I. Reid, S. Roth, K. Schindler, MOTChallenge 2015: Towards a benchmark for multi-target tracking, arXiv preprint arXiv:1504.01942] demonstrate that our method outperforms state-of-the-art multi-person tracking algorithms.
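The abstract describes appearance affinities between existing trajectories and new detections being used to guide online data association in a tracking-by-detection framework. The sketch below is a minimal, hypothetical illustration of that association step, assuming a placeholder `appearance_affinity` function and a `min_affinity` gate; it does not reproduce the paper's temporal dynamic appearance model or its incremental learning, and all names are illustrative.

```python
# Minimal sketch (not the authors' implementation): per-frame association of
# detections to trajectories driven by an appearance-affinity matrix.
import numpy as np
from scipy.optimize import linear_sum_assignment


def appearance_affinity(track_features, det_feature):
    """Toy affinity: cosine similarity between a detection's feature and the
    mean of a trajectory's recent appearance features. The paper instead
    models the temporal dynamics of mid-level semantic features; this
    stand-in only shows where such an affinity plugs into association."""
    mean_feat = np.mean(track_features, axis=0)
    denom = np.linalg.norm(mean_feat) * np.linalg.norm(det_feature) + 1e-12
    return float(np.dot(mean_feat, det_feature) / denom)


def associate(tracks, detections, min_affinity=0.5):
    """Assign detections to trajectories with the Hungarian algorithm on the
    negated affinity matrix, gating out low-affinity matches."""
    if not tracks or not detections:
        return [], list(range(len(tracks))), list(range(len(detections)))
    affinity = np.array([[appearance_affinity(t, d) for d in detections]
                         for t in tracks])
    rows, cols = linear_sum_assignment(-affinity)  # maximize total affinity
    matches = [(r, c) for r, c in zip(rows, cols)
               if affinity[r, c] >= min_affinity]
    matched_t = {r for r, _ in matches}
    matched_d = {c for _, c in matches}
    unmatched_tracks = [i for i in range(len(tracks)) if i not in matched_t]
    unmatched_dets = [j for j in range(len(detections)) if j not in matched_d]
    return matches, unmatched_tracks, unmatched_dets
```

In the paper's setting, the affinity would come from the learned temporal dynamic appearance model, updated incrementally as new appearances are observed, rather than the cosine-similarity stand-in used here.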

Original language: English
Pages (from-to): 16-28
Number of pages: 13
Journal: Computer Vision and Image Understanding
Volume: 153
DOI
Publication status: Published - 1 Dec 2016
