TY - GEN
T1 - Human action recognition using discriminative models in the learned hierarchical manifold space
AU - Han, Lei
AU - Liang, Wei
AU - Wu, Xinxiao
AU - Jia, Yunde
PY - 2008
Y1 - 2008
N2 - This paper proposes a hierarchical learning-based approach to human action recognition. It consists of feature extraction via hierarchical nonlinear dimensionality reduction and action modeling via cascade discriminative models. Human actions are inferred from body joint motions, and the human body is decomposed into several physiological body parts according to its inherent hierarchy (e.g., the right arm, left arm, and head all belong to the upper body). We explore the underlying hierarchical structure of the high-dimensional human pose space using the Hierarchical Gaussian Process Latent Variable Model (HGPLVM) and learn a representative motion pattern set for each body part. In the hierarchical manifold space, bottom-up cascade Conditional Random Fields (CRFs) predict the corresponding motion pattern in each manifold subspace, and the final action label for each observation is then estimated by a discriminative classifier on the current motion pattern set.
AB - This paper proposes a hierarchical learning-based approach to human action recognition. It consists of feature extraction via hierarchical nonlinear dimensionality reduction and action modeling via cascade discriminative models. Human actions are inferred from body joint motions, and the human body is decomposed into several physiological body parts according to its inherent hierarchy (e.g., the right arm, left arm, and head all belong to the upper body). We explore the underlying hierarchical structure of the high-dimensional human pose space using the Hierarchical Gaussian Process Latent Variable Model (HGPLVM) and learn a representative motion pattern set for each body part. In the hierarchical manifold space, bottom-up cascade Conditional Random Fields (CRFs) predict the corresponding motion pattern in each manifold subspace, and the final action label for each observation is then estimated by a discriminative classifier on the current motion pattern set.
UR - http://www.scopus.com/inward/record.url?scp=67650691101&partnerID=8YFLogxK
U2 - 10.1109/AFGR.2008.4813416
DO - 10.1109/AFGR.2008.4813416
M3 - Conference contribution
AN - SCOPUS:67650691101
SN - 9781424421541
T3 - 2008 8th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2008
BT - 2008 8th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2008
T2 - 2008 8th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2008
Y2 - 17 September 2008 through 19 September 2008
ER -