Auto learning temporal atomic actions for activity classification

Jiangen Zhang, Benjamin Yao, Yongtian Wang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)

Abstract

In this paper, we present a model for learning atomic actions for complex activity classification. A video sequence is first represented by a collection of visual interest points. The model then automatically clusters visual words into atomic actions (topics) based on their co-occurrence and temporal proximity within the same activity category, using an extension of the hierarchical Dirichlet process (HDP) mixture model. Because the HDP is a generative model, our approach is robust to noisy interest points arising under various conditions. Finally, we use both a Naive Bayes classifier and a linear SVM for activity classification. We first use intermediate results on a synthetic example to demonstrate the advantages of our model, and then apply it to the complex 16-class Olympic Sports dataset, where it outperforms other state-of-the-art methods.
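As a rough illustration of the pipeline the abstract describes (not the authors' implementation), the sketch below substitutes scikit-learn's LatentDirichletAllocation for the paper's extended HDP mixture model (which additionally models temporal proximity and infers the number of topics), and assumes each video has already been quantized into a bag of visual-word counts. All data, parameter values, and variable names are hypothetical placeholders.

```python
# Minimal sketch: topic-based "atomic action" features + linear SVM classification.
# Assumes pre-computed visual-word histograms per video; values are synthetic.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Placeholder data: 200 "videos", each a histogram over 500 visual words,
# with 16 activity labels (mirroring a 16-class setting like Olympic Sports).
X_counts = rng.poisson(lam=1.0, size=(200, 500))
y = rng.integers(0, 16, size=200)

# Step 1: discover "atomic action" topics from visual-word co-occurrence.
# (The paper's HDP extension infers the number of topics; here it is fixed.)
lda = LatentDirichletAllocation(n_components=20, random_state=0)
X_topics = lda.fit_transform(X_counts)   # per-video topic proportions

# Step 2: classify activities from the topic representation with a linear SVM.
clf = LinearSVC(C=1.0)
clf.fit(X_topics, y)
print("training accuracy:", clf.score(X_topics, y))
```

The two-stage structure (unsupervised topic discovery followed by a discriminative classifier on topic proportions) is the part that mirrors the abstract; the specific models and settings here are stand-ins chosen only because they are readily available.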

Original language: English
Pages (from-to): 1789-1798
Number of pages: 10
Journal: Pattern Recognition
Volume: 46
Issue number: 7
DOI
Publication status: Published - Jul 2013
