Video pose estimation with global motion cues

Qingxuan Shi, Huijun Di, Yao Lu*, Feng Lv, Xuedong Tian

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

8 Citations (Scopus)

Abstract

This paper addresses the problem of pose estimation in video sequences in which human pose changes drastically over time. Popular strategies for video pose estimation first generate multiple pose candidates for each frame and then achieve consistent estimation by enforcing temporal constraints across frames. To enrich the pose candidates, previous methods typically employ local motion cues to propagate pose detections to adjacent frames. Reasonable pose proposals can be obtained only when the local motion estimation is accurate and good detections exist in adjacent frames, both of which are hard to satisfy under drastic pose changes. In this paper, we propose to propagate pose detections to the entire video sequence through global motion cues, which provide a long-term, holistic, non-rigid motion transformation for the given video. We exploit the temporal continuity of both single parts and part pairs during inference over a spatio-temporal model, stitching reasonable trajectory fragments for each part to obtain the final pose estimate. Experimental results demonstrate a remarkable performance improvement over state-of-the-art methods.
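The candidate-enrichment-and-stitching strategy the abstract describes can be illustrated with a minimal sketch. This is not the paper's implementation: here `global_motion` is a hypothetical stand-in for the long-term non-rigid motion transformation, `unary_cost` stands in for a per-frame detection score, and a simple Viterbi-style dynamic program plays the role of inference over the spatio-temporal model; all names and costs are assumptions for illustration.

```python
import numpy as np

def propagate_candidates(candidates, global_motion):
    """Enrich each frame's pose candidates by warping every other frame's
    candidates to it via a sequence-wide motion transformation.

    candidates: dict {frame t: (K, 2) array of 2D part positions}
    global_motion: callable (pts, t_src, t_dst) -> warped pts
        (hypothetical stand-in for the long-term non-rigid transformation)
    """
    T = len(candidates)
    enriched = {t: [candidates[t]] for t in range(T)}
    for src in range(T):
        for dst in range(T):
            if src != dst:
                enriched[dst].append(global_motion(candidates[src], src, dst))
    return {t: np.vstack(enriched[t]) for t in range(T)}

def stitch_trajectory(enriched, unary_cost, smooth_weight=1.0):
    """Pick one candidate per frame by dynamic programming, trading off a
    per-frame (unary) cost against temporal displacement between frames
    (a toy temporal-continuity term)."""
    T = len(enriched)
    cost = {0: unary_cost(0, enriched[0])}
    back = {}
    for t in range(1, T):
        # pairwise term: Euclidean displacement between consecutive picks
        pair = np.linalg.norm(
            enriched[t][:, None, :] - enriched[t - 1][None, :, :], axis=-1)
        total = unary_cost(t, enriched[t])[:, None] \
            + smooth_weight * pair + cost[t - 1][None, :]
        back[t] = total.argmin(axis=1)
        cost[t] = total.min(axis=1)
    # backtrack the minimum-cost trajectory
    idx = [int(cost[T - 1].argmin())]
    for t in range(T - 1, 0, -1):
        idx.append(int(back[t][idx[-1]]))
    idx.reverse()
    return [enriched[t][i] for t, i in enumerate(idx)]

# Toy demo: a part drifting rightwards, one outlier candidate per frame,
# and a pure-translation "global motion" model.
if __name__ == "__main__":
    T = 4
    true = np.array([[float(t), 0.0] for t in range(T)])
    cands = {t: np.array([[float(t), 0.0], [10.0, 10.0]]) for t in range(T)}
    motion = lambda pts, s, d: pts + np.array([float(d - s), 0.0])
    unary = lambda t, pts: np.linalg.norm(pts - true[t], axis=1)
    traj = stitch_trajectory(propagate_candidates(cands, motion), unary)
    print(traj)
```

In this toy setup the outlier candidates incur both high unary cost and large temporal jumps, so the DP recovers the smooth true trajectory; in the paper's full model the same continuity idea is applied to both single parts and part pairs.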

Original language: English
Pages (from-to): 269-279
Number of pages: 11
Journal: Neurocomputing
Volume: 219
DOIs
Publication status: Published - 5 Jan 2017

Keywords

  • Global motion estimation
  • Pose detection
  • Pose estimation
