3-D Relation Network for visual relation recognition in videos

  • Qianwen Cao*
  • Heyan Huang
  • Xindi Shang
  • Boran Wang
  • Tat Seng Chua

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

24 Citations (Scopus)

Abstract

Video visual relation recognition aims at mining dynamic relation instances between objects in the form of 〈subject, predicate, object〉, such as “person1-towards-person2” and “person-ride-bicycle”. Existing solutions treat the problem as several independent sub-tasks, i.e., image object detection, video object tracking, and trajectory-based relation prediction. We argue that such separation blocks the information flow between the sub-models, producing redundant representations and preventing the sub-tasks from sharing a common set of task-specific features. To this end, we connect the three sub-tasks in an end-to-end manner by proposing the 3-D relation proposal, which serves as a bridge for relation feature learning. Specifically, we put forward a novel deep neural network, named 3DRN, that fuses spatio-temporal visual characteristics, object label features, and spatial interactive features to learn relation instances from multi-modal cues. In addition, a three-staged training strategy is provided to facilitate large-scale parameter optimization. We conduct extensive experiments on two public datasets with different emphases to demonstrate the effectiveness of the proposed end-to-end feature learning method for visual relation recognition in videos. Furthermore, we verify the potential of our approach on the video relation detection task.
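The abstract's description of 3DRN fusing spatio-temporal visual features, object label features, and spatial interactive features can be illustrated with a minimal sketch. The PyTorch snippet below is not the authors' implementation: the module name RelationFusionHead, all feature dimensions, and the 8-dimensional box-geometry encoding are hypothetical assumptions; it only shows one plausible way the three feature streams of a 〈subject, predicate, object〉 proposal might be concatenated and classified into predicates.

```python
# Minimal sketch (not the authors' released code): a fusion head in the
# spirit of 3DRN, combining three per-proposal feature streams --
# pooled spatio-temporal visual features, subject/object label embeddings,
# and spatial interactive features -- to score predicates for one
# <subject, predicate, object> proposal. All dimensions, names, and the
# box-geometry encoding are illustrative assumptions.
import torch
import torch.nn as nn


class RelationFusionHead(nn.Module):
    def __init__(self, visual_dim=1024, num_classes=35, label_dim=300,
                 spatial_dim=64, hidden_dim=512, num_predicates=132):
        super().__init__()
        # Label features: embed the detected subject/object class indices.
        self.label_embed = nn.Embedding(num_classes, label_dim)
        # Spatial interactive features: encode relative box geometry.
        self.spatial_fc = nn.Sequential(nn.Linear(8, spatial_dim), nn.ReLU())
        # Fusion MLP over the concatenated multi-modal representation.
        fused_dim = visual_dim + 2 * label_dim + spatial_dim
        self.classifier = nn.Sequential(
            nn.Linear(fused_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, num_predicates))

    def forward(self, visual_feat, subj_cls, obj_cls, box_geometry):
        # visual_feat:  (B, visual_dim) pooled 3-D conv features of the pair
        # subj_cls, obj_cls: (B,) detected object class indices
        # box_geometry: (B, 8) normalized subject/object box coordinates
        subj_emb = self.label_embed(subj_cls)
        obj_emb = self.label_embed(obj_cls)
        spatial = self.spatial_fc(box_geometry)
        fused = torch.cat([visual_feat, subj_emb, obj_emb, spatial], dim=-1)
        return self.classifier(fused)  # predicate logits


if __name__ == "__main__":
    head = RelationFusionHead()
    logits = head(torch.randn(4, 1024),
                  torch.randint(0, 35, (4,)),
                  torch.randint(0, 35, (4,)),
                  torch.randn(4, 8))
    print(logits.shape)  # torch.Size([4, 132])
```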

Original language: English
Pages (from-to): 91-100
Number of pages: 10
Journal: Neurocomputing
Volume: 432
DOIs
Publication status: Published - 7 Apr 2021

Keywords

  • Computer vision
  • Deep neural network
  • Video visual relation recognition
  • Visual relation detection
