Deep spatial-temporal joint feature representation for video object detection

Baojun Zhao, Boya Zhao, Linbo Tang*, Yuqi Han, Wenzheng Wang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

20 Citations (Scopus)

Abstract

With the development of deep neural networks, many object detection frameworks have shown great success in the fields of smart surveillance, self-driving cars, and facial recognition. However, the data sources are usually videos, while most object detection frameworks are built on still images and use only spatial information; as a result, feature consistency across frames cannot be ensured, because the training procedure discards temporal information. To address these problems, we propose a single, fully-convolutional neural network-based object detection framework that incorporates temporal information via a Siamese network. In the training procedure, first, the prediction network combines multiscale feature maps to handle objects of various sizes. Second, we introduce a correlation loss computed with the Siamese network, which extracts features from neighboring frames; this loss models object co-occurrence across time and encourages consistent feature generation. Since the correlation loss requires track IDs and detection labels, we evaluate our video object detection network on the large-scale ImageNet VID dataset, which provides both annotations; it achieves a 69.5% mean average precision (mAP).
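
For illustration, here is a minimal PyTorch sketch of the core idea, not the authors' released code: a shared-weight backbone stands in for the two Siamese branches, and the correlation loss is modeled as a cosine-similarity consistency penalty between pooled features of track-ID-matched objects in two neighboring frames. All names (SiameseBackbone, correlation_loss) and the global-pooling stand-in for per-object ROI pooling are hypothetical.

```python
# Hypothetical sketch of a Siamese correlation loss for feature consistency
# across neighboring video frames; not the paper's actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseBackbone(nn.Module):
    """Shared-weight convolutional feature extractor applied to both frames.

    Weight sharing is what makes the two branches 'Siamese': the same
    parameters embed frame t and frame t+1 into a common feature space.
    """
    def __init__(self, out_channels: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, out_channels, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.features(x)

def correlation_loss(feat_t: torch.Tensor, feat_t1: torch.Tensor) -> torch.Tensor:
    """Penalize feature inconsistency for the same object across frames.

    feat_t, feat_t1: (N, C) pooled features of N track-ID-matched objects
    in frame t and frame t+1. Loss is 1 - cosine similarity, averaged.
    """
    return (1.0 - F.cosine_similarity(feat_t, feat_t1, dim=1)).mean()

# Usage: run both frames through the *same* backbone, pool per-object
# features (global pooling here as a stand-in for ROI pooling), and add
# this term to the usual detection losses during training.
backbone = SiameseBackbone()
frame_t  = torch.randn(2, 3, 128, 128)       # frames at time t
frame_t1 = torch.randn(2, 3, 128, 128)       # matching frames at time t+1
f_t  = backbone(frame_t).mean(dim=(2, 3))    # (N, C) pooled features
f_t1 = backbone(frame_t1).mean(dim=(2, 3))
loss_corr = correlation_loss(f_t, f_t1)
```

In a training setup like the one the abstract describes, this term would be weighted and summed with the standard classification and box-regression losses; the track IDs determine which feature pairs are matched across frames.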

Original language: English
Article number: 774
Journal: Sensors
Volume: 18
Issue: 3
DOI
Publication status: Published - 4 Mar 2018
