A deep model combining structural features and context cues for action recognition in static images

Xinxin Wang, Kan Li*, Yang Li

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1 Citation (Scopus)

Abstract

In this paper, we present a deep model for action recognition in static images that combines body structural information and context cues to build a more accurate classifier. Moreover, to construct more semantic and robust body structural features, we propose a new body descriptor, named the limb angle descriptor (LAD), which uses the relative angles between the limbs of the 2D skeleton. We evaluate our method on the PASCAL VOC 2012 Action dataset and compare it with published results. Our method achieves 90.6% mean AP, outperforming previous state-of-the-art approaches in the field.
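The LAD described in the abstract encodes relative angles between limbs of a 2D skeleton. The following is a minimal sketch of how such a descriptor could be computed, assuming a COCO-style 17-joint 2D skeleton and a hypothetical limb list; the paper's actual joint set, limb definitions, and angle encoding may differ.

# Illustrative sketch of a limb-angle style descriptor (not the authors' exact
# formulation): given 2D joint coordinates, form limb vectors and encode the
# signed relative angle between every pair of limbs.
import numpy as np

# Hypothetical limb definitions over a COCO-like 17-joint 2D skeleton
# (parent joint index, child joint index).
LIMBS = [
    (5, 7), (7, 9),      # left upper arm, left forearm
    (6, 8), (8, 10),     # right upper arm, right forearm
    (11, 13), (13, 15),  # left thigh, left shin
    (12, 14), (14, 16),  # right thigh, right shin
]

def limb_angle_descriptor(joints_2d):
    """joints_2d: (num_joints, 2) array of x, y coordinates.
    Returns a vector of relative angles between all limb pairs."""
    # Limb vectors: child joint minus parent joint.
    vecs = np.array([joints_2d[b] - joints_2d[a] for a, b in LIMBS], dtype=float)
    angles = []
    for i in range(len(vecs)):
        for j in range(i + 1, len(vecs)):
            v1, v2 = vecs[i], vecs[j]
            # Signed relative angle in (-pi, pi] via cross and dot products.
            cross = v1[0] * v2[1] - v1[1] * v2[0]
            dot = v1 @ v2
            angles.append(np.arctan2(cross, dot))
    return np.array(angles)

if __name__ == "__main__":
    # Example: random keypoints standing in for a detected 2D skeleton.
    joints = np.random.rand(17, 2)
    print(limb_angle_descriptor(joints).shape)  # (28,) for 8 limbs

Such a descriptor is invariant to translation and scale of the skeleton, since only directions of limb vectors relative to each other are retained, which is one reason relative angles are an attractive structural feature.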

Original language: English
Title of host publication: Neural Information Processing - 24th International Conference, ICONIP 2017, Proceedings
Editors: Derong Liu, Shengli Xie, Dongbin Zhao, Yuanqing Li, El-Sayed M. El-Alfy
Publisher: Springer Verlag
Pages: 622-632
Number of pages: 11
ISBN (Print): 9783319701356
DOIs
Publication status: Published - 2017
Event: 24th International Conference on Neural Information Processing, ICONIP 2017 - Guangzhou, China
Duration: 14 Nov 2017 - 18 Nov 2017

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 10639 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 24th International Conference on Neural Information Processing, ICONIP 2017
Country/Territory: China
City: Guangzhou
Period: 14/11/17 - 18/11/17

Keywords

  • Action recognition
  • Body descriptor
  • Context cue
  • Deep model
