A survey of video human action recognition based on deep learning

Chun Yan Bi, Yue Liu*

*Corresponding author for this work

Research output: Contribution to journal › Review article › peer-review

1 Citation (Scopus)

Abstract

With the rapid advancement of network multimedia technology and the continuous improvement of video capture equipment, an increasing number of videos are shared on network platforms, gradually becoming an integral part of human life. Consequently, video understanding has become a hot spot of computer vision research, with action recognition being one of its pivotal tasks. While 2D image recognition and classification methods based on deep learning have made significant strides, video action recognition still faces formidable challenges. Videos differ from 2D images in having an additional temporal dimension, and understanding actions such as walking, running, high jumping, and long jumping requires not only the spatial semantic information that 2D images provide but also temporal information. Effectively exploiting the temporal information of videos is therefore critical for action recognition. This paper first introduces the research background and development of action recognition, then analyzes the current challenges in video action recognition. Methods for temporal modeling and parameter optimization are presented in detail, along with the commonly used action recognition datasets and evaluation metrics. Finally, future research directions in this field are outlined.
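As a concrete illustration of the extra temporal dimension discussed above, the following is a minimal sketch (assuming PyTorch; the clip shape and layer sizes are illustrative and not taken from the paper) contrasting frame-wise 2D convolution with 3D spatio-temporal convolution, one common family of temporal modeling approaches covered by surveys of this kind.

    # Minimal sketch (assumed PyTorch): 2D vs. 3D convolution on a video clip.
    # Shapes and channel counts are illustrative only.
    import torch
    import torch.nn as nn

    # A clip of 8 RGB frames at 112x112: (batch, channels, time, height, width)
    clip = torch.randn(1, 3, 8, 112, 112)

    # 2D convolution processes each frame independently: it captures the spatial
    # semantics of individual frames but ignores ordering along the time axis.
    conv2d = nn.Conv2d(3, 64, kernel_size=3, padding=1)
    per_frame = torch.stack(
        [conv2d(clip[:, :, t]) for t in range(clip.shape[2])], dim=2
    )

    # 3D convolution slides its kernel over time as well, so short-range motion
    # (e.g. the difference between walking and running) enters the features.
    conv3d = nn.Conv3d(3, 64, kernel_size=(3, 3, 3), padding=1)
    spatio_temporal = conv3d(clip)

    print(per_frame.shape)        # torch.Size([1, 64, 8, 112, 112])
    print(spatio_temporal.shape)  # torch.Size([1, 64, 8, 112, 112])

Both outputs have the same shape here, but only the 3D variant mixes information across neighboring frames, which is why purely frame-wise 2D models need an additional temporal aggregation step.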

Original language: English
Pages (from-to): 625-639
Number of pages: 15
Journal: Journal of Graphics
Volume: 44
Issue number: 4
DOIs
Publication status: Published - 31 Aug 2023

Keywords

  • Action recognition
  • Computer vision
  • Convolutional neural network
  • Deep learning
  • Video understanding
