Abstract
This paper addresses the problem of pose estimation in video sequences in which human pose changes drastically over time. Popular strategies for video pose estimation first yield multiple pose candidates for each frame and then achieve consistent pose estimation by enforcing temporal constraints across frames. To enrich the pose candidates, previous methods typically employ local motion cues to propagate pose detections to adjacent frames. Reasonable pose proposals can be achieved only when the local motion estimation is accurate and good detections exist among adjacent frames, both of which are hard to satisfy under drastic human pose changes. In this paper, we propose to propagate pose detections to the entire video sequence through global motion cues, which provide a long-term, holistic non-rigid motion transformation for the given video. We exploit the temporal continuity of both single parts and part pairs in the inference over a spatio-temporal model to stitch reasonable trajectory fragments for each part and obtain the final pose estimation. Experimental results demonstrate a remarkable performance improvement over state-of-the-art methods.
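The stitching step described above can be illustrated with a minimal sketch. The following is not the paper's actual model; it assumes a simplified setting for a single body part, where each frame has candidate detections with unary costs and temporal continuity is encoded as a squared-displacement penalty, solved by Viterbi-style dynamic programming. All function and parameter names are hypothetical.

```python
# Hypothetical sketch of trajectory stitching for one body part:
# pick one candidate per frame so that detection cost plus a
# temporal-smoothness penalty is minimized (Viterbi-style DP).
# This is an illustrative simplification, not the paper's method.

def stitch_trajectory(candidates, unary, smooth_weight=1.0):
    """candidates: per-frame lists of (x, y) candidate positions.
    unary: matching per-frame lists of detection costs (lower is better).
    Returns the index of the chosen candidate in each frame."""
    T = len(candidates)
    INF = float("inf")
    cost = [list(unary[0])]   # cumulative cost per candidate
    back = []                 # backpointers for path recovery
    for t in range(1, T):
        row, brow = [], []
        for j, (xj, yj) in enumerate(candidates[t]):
            best, arg = INF, -1
            for i, (xi, yi) in enumerate(candidates[t - 1]):
                # temporal continuity: penalize large displacements
                c = cost[-1][i] + smooth_weight * ((xj - xi) ** 2 + (yj - yi) ** 2)
                if c < best:
                    best, arg = c, i
            row.append(best + unary[t][j])
            brow.append(arg)
        cost.append(row)
        back.append(brow)
    # backtrack from the cheapest final state
    j = min(range(len(cost[-1])), key=cost[-1].__getitem__)
    path = [j]
    for t in range(T - 2, -1, -1):
        j = back[t][j]
        path.append(j)
    return path[::-1]
```

The full spatio-temporal model in the paper additionally couples part pairs across frames, which this single-part sketch omits.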
| Original language | English |
|---|---|
| Pages (from-to) | 269-279 |
| Number of pages | 11 |
| Journal | Neurocomputing |
| Volume | 219 |
| DOIs | |
| Publication status | Published - 5 Jan 2017 |
Keywords
- Global motion estimation
- Pose detection
- Pose estimation