Deep spatial-temporal joint feature representation for video object detection

Baojun Zhao, Boya Zhao, Linbo Tang*, Yuqi Han, Wenzheng Wang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

20 Citations (Scopus)

Abstract

With the development of deep neural networks, object detection frameworks have achieved great success in fields such as smart surveillance, self-driving cars, and facial recognition. However, the data sources are usually videos, while most object detection frameworks are built on still images and use only spatial information, so feature consistency cannot be ensured because the training procedure discards temporal information. To address these problems, we propose a single, fully convolutional neural network-based object detection framework that incorporates temporal information through Siamese networks. In the training procedure, first, the prediction network combines multiscale feature maps to handle objects of various sizes. Second, we introduce a correlation loss computed with a Siamese network, which provides features from neighboring frames. This correlation loss represents object co-occurrences across time and guides the network toward consistent feature generation. Because the correlation loss requires track ID and detection label annotations, we evaluate our video object detection network on the large-scale ImageNet VID dataset, where it achieves a 69.5% mean average precision (mAP).
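The abstract does not give the exact formulation of the correlation loss, but the idea of encouraging consistent features for the same object across neighboring frames can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the paper's actual loss): it assumes the two Siamese branches each produce a per-object feature vector, and it penalizes low cosine similarity between the vectors of the same tracked object in frames t and t+1.

```python
import numpy as np

def correlation_loss(feat_t, feat_t1):
    """Hypothetical correlation loss between features of the same object
    (matched by track ID) in neighboring frames.

    feat_t, feat_t1 : 1-D feature vectors from the two Siamese branches.
    Returns 1 - cosine similarity: 0 when the features point in the same
    direction, up to 2 when they are opposite.
    """
    a = feat_t / np.linalg.norm(feat_t)
    b = feat_t1 / np.linalg.norm(feat_t1)
    return 1.0 - float(np.dot(a, b))
```

In a real training loop this term would be added to the standard detection losses (classification and box regression), so that minimizing the total loss pushes the backbone toward temporally consistent feature generation.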

Original language: English
Article number: 774
Journal: Sensors
Volume: 18
Issue number: 3
DOIs
Publication status: Published - 4 Mar 2018

Keywords

  • Deep neural network
  • Multiscale feature representation
  • Siamese network
  • Temporal information
  • Video object detection

