Abstract
This paper proposes a novel framework based on action recognition feedback for reconstructing the pose of an articulated human body from monocular images. The intrinsic ambiguity caused by perspective projection makes it difficult to accurately recover articulated poses from monocular images. To alleviate this ambiguity, we exploit high-level motion knowledge as action recognition feedback to discard implausible estimates and generate more accurate pose candidates, drawing on the large number of motion constraints present in natural human movement. The motion knowledge is represented by both local and global motion constraints: the local spatial constraint captures motion correlations between body parts with multiple relevance vector machines, while the global temporal constraint preserves temporal coherence between time-ordered poses via a manifold motion template. Experiments on the CMU Mocap database demonstrate that our method achieves better estimation accuracy than methods without action recognition feedback.
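The sketch below illustrates the flavor of the local spatial constraint described in the abstract: per action class, a sparse Bayesian regressor learns the correlation between one body part's joint angles and another's, and pose candidates that violate that correlation are ranked down. This is not the authors' implementation; scikit-learn's `ARDRegression` stands in for the relevance vector machine, and the array shapes, part names, and scoring rule are illustrative assumptions.

```python
# Minimal sketch of a "local spatial constraint" between body parts,
# assuming per-action training data of joint angles (hypothetical shapes).
import numpy as np
from sklearn.linear_model import ARDRegression        # stand-in for an RVM
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)

# Hypothetical training data for one action class (e.g. "walking"):
# torso_train: (n_frames, d_torso) joint angles of the reference part
# limb_train:  (n_frames, d_limb)  joint angles of a dependent limb
n_frames, d_torso, d_limb = 500, 6, 9
torso_train = rng.normal(size=(n_frames, d_torso))
limb_train = (torso_train @ rng.normal(size=(d_torso, d_limb))
              + 0.1 * rng.normal(size=(n_frames, d_limb)))

# One sparse Bayesian regressor per output dimension captures the
# inter-part motion correlation for this action class.
local_constraint = MultiOutputRegressor(ARDRegression())
local_constraint.fit(torso_train, limb_train)

def candidate_score(torso_pose, limb_pose):
    """Plausibility score: small residual means the candidate limb pose
    agrees with the learned correlation for this action class."""
    predicted_limb = local_constraint.predict(torso_pose[None, :])[0]
    return -np.linalg.norm(limb_pose - predicted_limb)

# Usage: rank pose candidates and keep the most plausible ones,
# mimicking the "discard implausible estimates" step of the framework.
candidates = [(rng.normal(size=d_torso), rng.normal(size=d_limb))
              for _ in range(20)]
ranked = sorted(candidates, key=lambda c: candidate_score(*c), reverse=True)
plausible = ranked[:5]
```

In the paper's framework this pruning would additionally be gated by the recognized action (one set of regressors per action class) and combined with the global temporal constraint from the manifold motion template; the sketch covers only the local, per-frame check.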
| Original language | English |
| --- | --- |
| Pages (from-to) | 1077-1085 |
| Number of pages | 9 |
| Journal | Pattern Recognition Letters |
| Volume | 30 |
| Issue number | 12 |
| DOI | |
| Publication status | Published - 1 Sep 2009 |