Tracking articulated objects by learning intrinsic structure of motion

Xinxiao Wu, Wei Liang*, Yunde Jia

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

8 Citations (Scopus)

Abstract

In this paper, we propose a novel dimensionality reduction method, temporal neighbor preserving embedding (TNPE), to learn the low-dimensional intrinsic motion manifold of articulated objects. The method simultaneously learns the embedding manifold and the mapping from an image feature space to the embedding space by preserving the local temporal relationships hidden in sequential data points. Tracking is then formulated as the problem of estimating the configuration of an articulated object from the learned central embedding representation. To solve this problem, we combine a Bayesian mixture of experts (BME) with a Gaussian mixture model (GMM) to establish a probabilistic non-linear mapping from the embedding space to the configuration space. Experimental results on articulated hand and human pose tracking show encouraging performance in both stability and accuracy.
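As a rough illustration of the pipeline the abstract describes, the sketch below is not taken from the paper: it builds an NPE-style linear embedding whose reconstruction neighbours are the temporally adjacent frames (a simplified stand-in for TNPE), then maps the embedding back to a pose configuration through the conditional mean of a joint Gaussian mixture, which only mimics the role of the paper's Bayesian mixture of experts combined with a GMM. The synthetic data, the ±k temporal neighbourhood, the least-squares reconstruction weights, and all regularisation constants are assumptions made for the example.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.mixture import GaussianMixture


def tnpe(X, dim=3, k=2):
    """Linear embedding of sequential features X (T x D) that preserves
    reconstruction from temporal neighbours (frames within +/- k)."""
    T, D = X.shape
    M = np.eye(T)
    for i in range(T):
        nbrs = [j for j in range(max(0, i - k), min(T, i + k + 1)) if j != i]
        Z = X[nbrs] - X[i]                          # local differences to frame i
        G = Z @ Z.T + 1e-3 * np.eye(len(nbrs))      # regularised local Gram matrix
        w = np.linalg.solve(G, np.ones(len(nbrs)))
        w /= w.sum()                                # reconstruction weights, sum to 1
        M[i, nbrs] -= w
    M = M.T @ M                                     # (I - W)^T (I - W)
    A = X.T @ M @ X                                 # reconstruction-error term
    B = X.T @ X + 1e-3 * np.eye(D)                  # scale constraint (regularised)
    _, vecs = eigh(A, B)                            # smallest generalised eigenvectors
    P = vecs[:, :dim]
    return P, X @ P


def gmm_conditional_mean(gmm, Y, d):
    """E[C | Y] under a joint full-covariance GMM over [Y, C]; Y has d dims."""
    n_out = gmm.means_.shape[1] - d
    log_r = np.zeros((len(Y), gmm.n_components))
    comp_means = []
    for k in range(gmm.n_components):
        mu, S = gmm.means_[k], gmm.covariances_[k]
        Syy, Scy = S[:d, :d], S[d:, :d]
        comp_means.append(mu[d:] + (Y - mu[:d]) @ (Scy @ np.linalg.inv(Syy)).T)
        diff = Y - mu[:d]
        maha = np.einsum('ij,ij->i', diff, np.linalg.solve(Syy, diff.T).T)
        log_r[:, k] = np.log(gmm.weights_[k]) - 0.5 * (maha + np.linalg.slogdet(Syy)[1])
    r = np.exp(log_r - log_r.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)               # responsibilities p(component | y)
    pred = np.zeros((len(Y), n_out))
    for k in range(gmm.n_components):
        pred += r[:, [k]] * comp_means[k]
    return pred


# Hypothetical sequence: 2-D "pose" configurations C driving 30-D image features X.
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 200)
C = np.c_[np.sin(t), np.cos(t)]
X = np.tanh(C @ rng.normal(size=(2, 30))) + 0.05 * rng.normal(size=(200, 30))

P, Y = tnpe(X, dim=3, k=2)                          # embedding learned from the sequence

# Joint mixture over (embedding, configuration); predict poses from the embedding.
joint = GaussianMixture(n_components=5, covariance_type='full', random_state=0)
joint.fit(np.hstack([Y, C]))
C_hat = gmm_conditional_mean(joint, Y, d=Y.shape[1])
print("mean absolute pose error:", float(np.abs(C_hat - C).mean()))
```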

Original language: English
Pages (from-to): 267-274
Number of pages: 8
Journal: Pattern Recognition Letters
Volume: 30
Issue number: 3
DOIs
Publication status: Published - 1 Feb 2009

Keywords

  • Articulated objects tracking
  • Computer vision
  • Non-linear manifold learning
