Auto learning temporal atomic actions for activity classification

Jiangen Zhang, Benjamin Yao, Yongtian Wang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)

Abstract

In this paper, we present a model for learning atomic actions for complex activity classification. A video sequence is first represented as a collection of visual interest points. The model then automatically clusters visual words into atomic actions (topics) based on their co-occurrence and temporal proximity within the same activity category, using an extension of the hierarchical Dirichlet process (HDP) mixture model. Because HDP is a generative model, our approach is robust to noisy interest points arising from varying recording conditions. Finally, we apply both a Naive Bayes classifier and a linear SVM to the activity classification problem. We first use the intermediate results on a synthetic example to demonstrate the advantages of our model, then apply it to the complex 16-class Olympic Sports dataset and show that it outperforms other state-of-the-art methods.
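As a rough illustration of the classification stage described above, the sketch below treats each video as a bag of learned atomic-action topic indices and scores it with a Naive Bayes classifier. This is a hypothetical, self-contained stand-in: the topic assignments here are hand-made toy data, not the output of the paper's temporal-HDP inference, and the class names are invented for the example.

```python
# Hypothetical sketch: Naive Bayes over atomic-action topic assignments.
# The topic indices stand in for the temporal-HDP output; they are toy data.
from collections import Counter, defaultdict
import math

def train_nb(videos, labels, n_topics, alpha=1.0):
    """Fit per-class topic distributions with Laplace smoothing."""
    counts = defaultdict(lambda: [alpha] * n_topics)
    priors = Counter(labels)
    for topics, y in zip(videos, labels):
        for t in topics:
            counts[y][t] += 1
    total = len(labels)
    model = {}
    for y, c in counts.items():
        s = sum(c)
        model[y] = (math.log(priors[y] / total),
                    [math.log(x / s) for x in c])
    return model

def classify(model, topics):
    """Pick the class maximizing log prior + summed topic log-likelihoods."""
    return max(model,
               key=lambda y: model[y][0] + sum(model[y][1][t] for t in topics))

# Toy example: class "jump" favors topics 0/1, "throw" favors topics 2/3.
train = [[0, 0, 1], [1, 0, 1], [2, 3, 3], [3, 2, 2]]
y = ["jump", "jump", "throw", "throw"]
m = train_nb(train, y, n_topics=4)
print(classify(m, [0, 1, 1]))   # -> jump
print(classify(m, [3, 3, 2]))   # -> throw
```

In the paper's pipeline a linear SVM over the same topic-histogram representation plays the analogous discriminative role.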

Original language: English
Pages (from-to): 1789-1798
Number of pages: 10
Journal: Pattern Recognition
Volume: 46
Issue number: 7
DOIs
Publication status: Published - Jul 2013

Keywords

  • Activity classification
  • Atomic action
  • Temporal-HDP

