Content-Attention Representation by Factorized Action-Scene Network for Action Recognition

Jingyi Hou, Xinxiao Wu, Yuchao Sun, Yunde Jia*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

44 Citations (Scopus)

Abstract

In action recognition from videos, irrelevant motions in the background can greatly degrade the performance of recognizing the specific actions of interest. In this paper, a novel deep neural network, called the factorized action-scene network (FASNet), is proposed to encode and fuse the most relevant and informative semantic cues for action recognition. Specifically, we decompose FASNet into two components. One is a newly designed encoding network, named the content attention network (CANet), which encodes local spatiotemporal features to learn action representations that are robust to the noise of irrelevant motions. The other is a fusion network, which integrates the pretrained CANet to fuse the encoded spatiotemporal features with contextual scene features extracted from the same video, learning more descriptive and discriminative action representations. Moreover, unlike existing deep learning methods for generic action recognition, which apply the softmax loss function as the training guidance, we formulate two loss functions to guide the proposed model toward more specific action recognition tasks: the multilabel correlation loss for multilabel action recognition and the triplet loss for complex event detection. Extensive experiments on the Hollywood2 dataset and the TRECVID MEDTest 14 dataset show that our method achieves superior performance compared with state-of-the-art methods.
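The abstract mentions a triplet loss as the training objective for complex event detection. As a rough illustration only (the paper's exact formulation may differ, and the embeddings below are hypothetical toy values), a minimal sketch of the standard margin-based triplet loss over embedding vectors:

```python
def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss: pull the anchor embedding toward the
    positive (same event class) and push it away from the negative
    (different event class) by at least `margin`."""
    # Squared Euclidean distances between embeddings.
    d_pos = sum((a - p) ** 2 for a, p in zip(anchor, positive))
    d_neg = sum((a - n) ** 2 for a, n in zip(anchor, negative))
    # Hinge: zero loss once the negative is margin farther than the positive.
    return max(d_pos - d_neg + margin, 0.0)

# Toy 3-D "action representations" (hypothetical values).
anchor   = [1.0, 0.0, 0.0]  # a video of the target event
positive = [0.9, 0.1, 0.0]  # same event class
negative = [0.0, 1.0, 0.0]  # different event class
print(triplet_loss(anchor, positive, negative))  # 0.0: triplet already satisfied
```

Minimizing this loss over many such triplets shapes the embedding space so that videos of the same event cluster together, which supports retrieval-style event detection.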

Original language: English
Pages (from-to): 1537-1547
Number of pages: 11
Journal: IEEE Transactions on Multimedia
Volume: 20
Issue number: 6
DOIs
Publication status: Published - Jun 2018

Keywords

  • Deep neural network
  • complex event detection
  • multi-label action recognition
