TY - JOUR
T1 - Bidirectional LSTM with saliency-aware 3D-CNN features for human action recognition
AU - Arif, Sheeraz
AU - Wang, Jing
AU - Siddiqui, Adnan Ahmed
AU - Hussain, Rashid
AU - Hussain, Fida
N1 - Publisher Copyright:
© 2021 University of Kuwait. All rights reserved.
PY - 2021/9/2
Y1 - 2021/9/2
N2 - Deep convolutional neural networks (DCNN) and recurrent neural networks (RNN) have proved to be an important research area in multimedia understanding and have achieved remarkable action recognition performance. However, videos contain rich motion information with varying dimensions. Existing recurrent-based pipelines fail to capture long-term motion dynamics in videos with various motion scales and complex actions performed by multiple actors. Considering contextual and salient features is more important than mapping a video frame into a static video representation. This research work provides a novel pipeline that analyzes and processes video information using a 3D convolutional (C3D) network and a newly introduced deep bidirectional LSTM. Like the popular two-stream ConvNet, we also introduce a two-stream framework, with one modification: we replace the optical flow stream with a saliency-aware stream to avoid its computational complexity. First, we generate a saliency-aware video stream by applying a saliency-aware method. Second, a two-stream 3D convolutional network (C3D) is applied to two different types of streams, i.e., an RGB stream and the saliency-aware video stream, to collect both spatial and semantic temporal features. Next, a deep bidirectional LSTM network is used to learn sequential deep temporal dynamics. Finally, a time-series pooling layer and a softmax layer classify human activity and behavior. The introduced system can learn long-term temporal dependencies and can predict complex human actions. Experimental results demonstrate significant improvements in action recognition accuracy on different benchmark datasets.
AB - Deep convolutional neural networks (DCNN) and recurrent neural networks (RNN) have proved to be an important research area in multimedia understanding and have achieved remarkable action recognition performance. However, videos contain rich motion information with varying dimensions. Existing recurrent-based pipelines fail to capture long-term motion dynamics in videos with various motion scales and complex actions performed by multiple actors. Considering contextual and salient features is more important than mapping a video frame into a static video representation. This research work provides a novel pipeline that analyzes and processes video information using a 3D convolutional (C3D) network and a newly introduced deep bidirectional LSTM. Like the popular two-stream ConvNet, we also introduce a two-stream framework, with one modification: we replace the optical flow stream with a saliency-aware stream to avoid its computational complexity. First, we generate a saliency-aware video stream by applying a saliency-aware method. Second, a two-stream 3D convolutional network (C3D) is applied to two different types of streams, i.e., an RGB stream and the saliency-aware video stream, to collect both spatial and semantic temporal features. Next, a deep bidirectional LSTM network is used to learn sequential deep temporal dynamics. Finally, a time-series pooling layer and a softmax layer classify human activity and behavior. The introduced system can learn long-term temporal dependencies and can predict complex human actions. Experimental results demonstrate significant improvements in action recognition accuracy on different benchmark datasets.
KW - Action recognition
KW - Convolutional neural network (CNN)
KW - Long short-term memory (LSTM)
KW - Recurrent neural network (RNN)
KW - Saliency
UR - http://www.scopus.com/inward/record.url?scp=85114193096&partnerID=8YFLogxK
U2 - 10.36909/jer.v9i3A.8383
DO - 10.36909/jer.v9i3A.8383
M3 - Article
AN - SCOPUS:85114193096
SN - 2307-1877
VL - 9
SP - 115
EP - 133
JO - Journal of Engineering Research (Kuwait)
JF - Journal of Engineering Research (Kuwait)
IS - 3
ER -