Combining multiple deep cues for action recognition

Ruiqi Wang, Xinxiao Wu*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

In this paper, we propose a novel deep-learning-based framework that fuses multiple cues, namely action motions, objects, and scenes, for complex action recognition. Since deep features have achieved promising results, three deep representations are extracted to capture both the temporal and the contextual information of actions. In particular, for the action cue, we first adopt a deep detection model to detect persons frame by frame and then feed the deep representations of the persons into a Gated Recurrent Unit (GRU) model to generate the action features. Unlike existing deep action features, our feature is capable of modeling the global dynamics of long human motions. The scene and object cues are represented by deep features pooled over all frames of a video. Moreover, we introduce an ℓp-norm multiple kernel learning method that effectively combines the multiple deep representations of a video to learn robust action classifiers by capturing the contextual relationships among action, object, and scene. Extensive experiments on two real-world action datasets (UCF101 and HMDB51) clearly demonstrate the effectiveness of our method.
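As a rough illustration of the pipeline the abstract describes, the sketch below shows the action cue (per-frame person detection followed by a GRU over the person features) and the pooled scene/object cue, assuming PyTorch and torchvision. The detector, backbone, input size, GRU hidden size, and score threshold are illustrative assumptions, not the authors' exact configuration, and ImageNet normalization is omitted for brevity.

```python
# Hedged sketch of the three-cue feature extraction (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
backbone = torchvision.models.resnet50(weights="DEFAULT")
backbone.fc = nn.Identity()  # expose the 2048-d pooled features
backbone.eval()
gru = nn.GRU(input_size=2048, hidden_size=512, batch_first=True)

@torch.no_grad()
def action_feature(frames):
    """frames: list of (3, H, W) float tensors in [0, 1], one per video frame."""
    person_feats = []
    for frame in frames:
        det = detector([frame])[0]
        # keep the highest-scoring 'person' detection (COCO label 1), if any
        keep = (det["labels"] == 1) & (det["scores"] > 0.5)
        if keep.any():
            x1, y1, x2, y2 = det["boxes"][keep][0].int().tolist()
            crop = frame[:, y1:y2, x1:x2].unsqueeze(0)
        else:
            crop = frame.unsqueeze(0)  # fall back to the whole frame
        crop = F.interpolate(crop, size=(224, 224), mode="bilinear", align_corners=False)
        person_feats.append(backbone(crop))
    seq = torch.cat(person_feats).unsqueeze(0)  # (1, T, 2048)
    _, h_n = gru(seq)                           # final hidden state summarizes the motion
    return h_n.squeeze(0).squeeze(0)            # action feature for the video

@torch.no_grad()
def context_feature(frames):
    """Scene/object cue: frame-level CNN features average-pooled over the video."""
    feats = [backbone(F.interpolate(f.unsqueeze(0), size=(224, 224),
                                    mode="bilinear", align_corners=False))
             for f in frames]
    return torch.cat(feats).mean(dim=0)
```

The ℓp-norm multiple kernel learning step then learns a weighted combination of the per-cue kernels jointly with the classifier. A full MKL solver is beyond a sketch; the snippet below only shows the kernel-combination step, with the weights beta assumed to be supplied by the learner and constrained to the ℓp unit sphere.

```python
import numpy as np

def combine_kernels(kernels, beta, p=2.0):
    """Return the weighted sum of per-cue Gram matrices under an l_p constraint."""
    beta = np.asarray(beta, dtype=float)
    beta = beta / np.linalg.norm(beta, ord=p)  # project weights onto the l_p unit sphere
    return sum(b * K for b, K in zip(beta, kernels))
```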

Original language: English
Pages (from-to): 9933-9950
Number of pages: 18
Journal: Multimedia Tools and Applications
Volume: 78
Issue number: 8
DOIs
Publication status: Published - 1 Apr 2019

Keywords

  • Action recognition
  • Multiple deep cues
  • ℓp-norm multiple kernel learning
