TY - JOUR
T1 - Deep spatial-temporal joint feature representation for video object detection
AU - Zhao, Baojun
AU - Zhao, Boya
AU - Tang, Linbo
AU - Han, Yuqi
AU - Wang, Wenzheng
N1 - Publisher Copyright:
© 2018 by the authors. Licensee MDPI, Basel, Switzerland.
PY - 2018/3/4
Y1 - 2018/3/4
N2 - With the development of deep neural networks, many object detection frameworks have shown great success in the fields of smart surveillance, self-driving cars, and facial recognition. However, the data sources are usually videos, while most object detection frameworks are built on still images and use only spatial information, so feature consistency cannot be ensured because the training procedure discards temporal information. To address these problems, we propose a single, fully convolutional neural network-based object detection framework that incorporates temporal information through a Siamese network. In the training procedure, first, the prediction network combines multiscale feature maps to handle objects of various sizes. Second, we introduce a correlation loss computed with the Siamese network, which provides features of neighboring frames. This correlation loss represents object co-occurrences across time and aids consistent feature generation. Since the correlation loss requires track ID and detection label information, we evaluate our video object detection network on the large-scale ImageNet VID dataset, where it achieves a 69.5% mean average precision (mAP).
AB - With the development of deep neural networks, many object detection frameworks have shown great success in the fields of smart surveillance, self-driving cars, and facial recognition. However, the data sources are usually videos, while most object detection frameworks are built on still images and use only spatial information, so feature consistency cannot be ensured because the training procedure discards temporal information. To address these problems, we propose a single, fully convolutional neural network-based object detection framework that incorporates temporal information through a Siamese network. In the training procedure, first, the prediction network combines multiscale feature maps to handle objects of various sizes. Second, we introduce a correlation loss computed with the Siamese network, which provides features of neighboring frames. This correlation loss represents object co-occurrences across time and aids consistent feature generation. Since the correlation loss requires track ID and detection label information, we evaluate our video object detection network on the large-scale ImageNet VID dataset, where it achieves a 69.5% mean average precision (mAP).
KW - Deep neural network
KW - Multiscale feature representation
KW - Siamese network
KW - Temporal information
KW - Video object detection
UR - http://www.scopus.com/inward/record.url?scp=85042860239&partnerID=8YFLogxK
U2 - 10.3390/s18030774
DO - 10.3390/s18030774
M3 - Article
C2 - 29510529
AN - SCOPUS:85042860239
SN - 1424-8220
VL - 18
JO - Sensors
JF - Sensors
IS - 3
M1 - 774
ER -