Abstract
This paper proposes a novel framework based on action recognition feedback for reconstructing articulated human body poses from monocular images. The intrinsic ambiguity caused by perspective projection makes it difficult to accurately recover articulated poses from monocular images. To alleviate this ambiguity, we exploit high-level motion knowledge as action recognition feedback to discard implausible estimates and generate more accurate pose candidates, drawing on the many motion constraints present in natural human movement. The motion knowledge is represented by both local and global constraints: the local spatial constraint captures motion correlation between body parts via multiple relevance vector machines, while the global temporal constraint preserves temporal coherence between time-ordered poses via a manifold motion template. Experiments on the CMU Mocap database demonstrate that our method achieves higher estimation accuracy than methods without action recognition feedback.
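The paper itself does not include code; the sketch below is only a minimal illustration of the candidate-rescoring idea the abstract describes, with kernel ridge regression standing in for the relevance vector machines and PCA standing in for the manifold motion template. All data, dimensions, and weights are hypothetical placeholders.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge   # stand-in for the relevance vector machines
from sklearn.decomposition import PCA          # stand-in for the manifold motion template

rng = np.random.default_rng(0)

# Hypothetical training data: T frames of D-dimensional joint-angle poses.
T, D = 500, 30
train_poses = rng.normal(size=(T, D))

# --- Local spatial constraint ------------------------------------------
# One regressor per pose coordinate, predicting it from the remaining
# coordinates, so correlated body parts constrain each other.
part_models = []
for d in range(D):
    rest = np.delete(train_poses, d, axis=1)
    part_models.append(KernelRidge(kernel="rbf", alpha=1.0).fit(rest, train_poses[:, d]))

def local_consistency(pose):
    """Mean error between each coordinate and its prediction from the others."""
    errs = []
    for d, m in enumerate(part_models):
        rest = np.delete(pose, d)[None, :]
        errs.append(abs(m.predict(rest)[0] - pose[d]))
    return float(np.mean(errs))

# --- Global temporal constraint -----------------------------------------
# Low-dimensional embedding of the training motion; candidates with a large
# reconstruction error lie far from the motion manifold.
manifold = PCA(n_components=5).fit(train_poses)

def manifold_distance(pose):
    recon = manifold.inverse_transform(manifold.transform(pose[None, :]))
    return float(np.linalg.norm(recon - pose[None, :]))

# --- Candidate rescoring -------------------------------------------------
def rescore(candidates, w_local=1.0, w_global=1.0):
    """Rank pose candidates by a combined local + global constraint cost."""
    costs = [w_local * local_consistency(c) + w_global * manifold_distance(c)
             for c in candidates]
    order = np.argsort(costs)
    return [candidates[i] for i in order], [costs[i] for i in order]

# Example: pick the most plausible of several hypothetical pose candidates.
candidates = [rng.normal(size=D) for _ in range(10)]
ranked, costs = rescore(candidates)
print("best candidate cost:", costs[0])
```

Implausible candidates score poorly on either the per-part consistency term or the manifold distance term, which is the sense in which the two constraints act as feedback on the pose estimates.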
| Original language | English |
| --- | --- |
| Pages (from-to) | 1077-1085 |
| Number of pages | 9 |
| Journal | Pattern Recognition Letters |
| Volume | 30 |
| Issue number | 12 |
| DOIs | |
| Publication status | Published - 1 Sept 2009 |
Keywords
- Action recognition feedback
- Human pose reconstruction
- Manifold motion template
- Motion correlation