VSRN: Visual-Semantic Relation Network for Video Visual Relation Inference

Qianwen Cao, Heyan Huang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)

Abstract

Video visual relation inference refers to the task of automatically detecting relation triplets between the observed objects in videos, in the form $\langle subject, predicate, object \rangle$, which requires correctly labeling each detected object and its interaction predicates. Despite recent advances in image visual relation detection using deep learning techniques, relation inference in videos remains a challenging topic. On one hand, because of the added temporal dimension, the rich spatio-temporal visual information of objects and videos must be modeled. On the other hand, videos in the wild are often annotated with incomplete relation triplet tags, some of which semantically overlap. Nevertheless, previous methods adopt hand-crafted visual features extracted from object trajectories, which describe only the local appearance of isolated objects, and they treat the problem as a multi-class classification task, which makes the relation tags mutually exclusive. To address these issues, we propose a novel model, termed Visual-Semantic Relation Network (VSRN). In this network, we leverage three-dimensional convolution kernels to capture spatio-temporal features, and encode global visual features of videos through a pooling operation on each time slice. Moreover, the semantic collocations between objects are incorporated so as to obtain comprehensive representations of the relationships. For relation classification, we treat the problem as a multi-label classification task and regard each tag as independent, so that multiple relationships can be predicted. Additionally, we modify the commonly used evaluation metric, video-wise recall, into a pair-wise metric ($R_{oop}$) that tests how well a model predicts multiple relationships for each object pair. Extensive experimental results on two large-scale datasets demonstrate the effectiveness of our proposed model, which significantly outperforms previous works.
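
To make the described pipeline concrete, below is a minimal PyTorch sketch of the ingredients the abstract names: a 3D-convolutional backbone for spatio-temporal features, pooling on each time slice for a global video descriptor, fusion with semantic (word-embedding) features of the subject/object labels, and an independent per-tag sigmoid head trained with binary cross-entropy so that relation tags are not mutually exclusive. The class name (VSRNSketch), layer sizes, and the fusion scheme are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class VSRNSketch(nn.Module):
    """Minimal sketch of the pipeline described in the abstract.
    All sizes and the fusion scheme are illustrative assumptions."""

    def __init__(self, num_predicates, vocab_size, word_dim=300, hidden=512):
        super().__init__()
        # 3D convolution kernels capture spatio-temporal visual features.
        self.backbone = nn.Sequential(
            nn.Conv3d(3, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(64, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Semantic collocations: embeddings of the subject/object labels.
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.classifier = nn.Sequential(
            nn.Linear(hidden + 2 * word_dim, hidden),
            nn.ReLU(inplace=True),
            # One independent logit per relation tag (multi-label setting).
            nn.Linear(hidden, num_predicates),
        )

    def forward(self, clip, subj_label, obj_label):
        # clip: (B, 3, T, H, W) video tensor.
        feat = self.backbone(clip)                # (B, C, T, H, W)
        # Global visual feature: spatial pooling on each time slice,
        # then pooling over the temporal axis.
        feat = feat.mean(dim=[3, 4]).mean(dim=2)  # (B, C)
        sem = torch.cat(
            [self.word_emb(subj_label), self.word_emb(obj_label)], dim=-1
        )
        logits = self.classifier(torch.cat([feat, sem], dim=-1))
        return logits  # apply a sigmoid per tag; tags are independent

# Multi-label training: BCE over independent sigmoid outputs, so several
# predicates can hold simultaneously for the same object pair.
model = VSRNSketch(num_predicates=132, vocab_size=1000)
clip = torch.randn(2, 3, 8, 64, 64)
subj = torch.tensor([3, 7]); obj = torch.tensor([5, 9])
targets = torch.zeros(2, 132); targets[0, [4, 17]] = 1.0
loss = nn.BCEWithLogitsLoss()(model(clip, subj, obj), targets)
```

Because each predicate gets its own sigmoid rather than competing in a softmax, several relations can be predicted for the same object pair, which is the point of casting the task as multi-label rather than multi-class.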

Original language: English
Pages (from-to): 768-777
Number of pages: 10
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Volume: 32
Issue number: 2
Publication status: Published - 1 Feb 2022

Keywords

  • Feature representation
  • Neural network
  • Video analysis
  • Visual relation inference
