A discriminative structural model for joint segmentation and recognition of human actions

Cuiwei Liu*, Jingyi Hou, Xinxiao Wu, Yunde Jia

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)

Abstract

Achieving joint segmentation and recognition of continuous actions in a long-term video is a challenging task due to the varying durations of actions and the complex transitions between multiple actions. In this paper, a novel discriminative structural model is proposed for splitting a long-term video into segments and annotating the action label of each segment. A set of state variables is introduced into the model to explore discriminative semantic concepts shared among different actions. To exploit the statistical dependencies among segments, temporal context is captured at both the action level and the semantic concept level. The state variables are treated as latent information in the discriminative structural model and inferred during both training and testing. Experiments on the multi-view IXMAS and realistic Hollywood datasets demonstrate the effectiveness of the proposed method.
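The joint segmentation-and-labeling task described above is commonly solved with segment-level (semi-Markov) dynamic programming: every candidate segment boundary and label is scored, and the highest-scoring partition of the video is recovered by Viterbi-style decoding. The sketch below illustrates only that generic decoding idea; the `segment_score` potential, the toy per-frame features, and all names are assumptions for illustration, not the paper's actual model or learned potentials.

```python
# Hedged sketch: semi-Markov (segment-level) Viterbi decoding for joint
# segmentation and labeling. The segment potential here is a toy stand-in,
# NOT the discriminative structural model from the paper.
import itertools

def segment_score(frames, start, end, label):
    # Toy potential: +1 for each frame agreeing with the segment label,
    # -1 otherwise. A learned model would score features of the segment.
    return sum(1.0 if f == label else -1.0 for f in frames[start:end])

def decode(frames, labels, max_len):
    """Return the best list of (start, end, label) segments covering frames."""
    T = len(frames)
    # best[t] = (best score of a segmentation of frames[0:t], backpointer)
    best = [(-float("inf"), None)] * (T + 1)
    best[0] = (0.0, None)
    for t in range(1, T + 1):
        # Try every admissible last-segment length and label ending at t.
        for length, y in itertools.product(range(1, min(max_len, t) + 1),
                                           sorted(labels)):
            s = t - length
            score = best[s][0] + segment_score(frames, s, t, y)
            if score > best[t][0]:
                best[t] = (score, (s, y))
    # Backtrack from T to recover the segmentation.
    segs, t = [], T
    while t > 0:
        s, y = best[t][1]
        segs.append((s, t, y))
        t = s
    return segs[::-1]

# Usage: frames stand in for per-frame predictions of a long video.
frames = ["walk", "walk", "wave", "wave", "wave", "walk"]
print(decode(frames, {"walk", "wave"}, max_len=4))
```

The decoder runs in O(T · max_len · |labels|) time; any optimal segmentation here assigns every frame its true label, since each mismatched frame lowers the score.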

Original language: English
Pages (from-to): 31627-31645
Number of pages: 19
Journal: Multimedia Tools and Applications
Volume: 77
Issue number: 24
DOIs
Publication status: Published - 1 Dec 2018

Keywords

  • Action recognition
  • Action segmentation
  • Discriminative structural model
