Online video visual relation detection with hierarchical multi-modal fusion

Yuxuan He, Ming Gang Gan*, Qianzhao Ma

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

With the development of artificial intelligence technology, visual scene understanding has become a hot research topic. Online visual relation detection plays an important role in dynamic visual scene understanding. However, completely modeling dynamic relations and exploiting large amounts of video content to infer visual relations remain two difficult problems to be solved. We therefore propose a Hierarchical Multi-Modal Fusion network for online video visual relation detection. We propose ASE-GCN to model dynamic scenes from different perspectives in order to fully capture the visual relations within them. Meanwhile, we use trajectory features and natural language features as auxiliary features that, together with the high-level visual features constructed by ASE-GCN, describe the visual scene. To make full use of this information when inferring visual relations, we design a Hierarchical Fusion module placed before the relation predictor, which fuses the multi-role and multi-modal features using attention-based and trilinear-pooling-based methods. Comparative experiments on the ImageNet-VidVRD dataset demonstrate that our network outperforms other methods, and ablation studies verify that the proposed modules are effective.
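To make the fusion step concrete, the following is a minimal, hypothetical PyTorch sketch of attention-gated low-rank trilinear pooling over three modality features (visual, trajectory, language). The class name, feature dimensions, and scalar per-modality attention gate are illustrative assumptions, not the paper's actual Hierarchical Fusion module.

```python
import torch
import torch.nn as nn

class TrilinearFusion(nn.Module):
    """Sketch: attention-gated low-rank trilinear pooling of three modalities.

    All names and dimensions here are assumptions for illustration; the paper's
    Hierarchical Fusion module may differ in structure and detail.
    """
    def __init__(self, d_vis, d_traj, d_lang, d_hid, d_out):
        super().__init__()
        # Project each modality into a shared hidden space.
        self.proj_vis = nn.Linear(d_vis, d_hid)
        self.proj_traj = nn.Linear(d_traj, d_hid)
        self.proj_lang = nn.Linear(d_lang, d_hid)
        # Scalar attention score per modality (assumed design).
        self.attn = nn.Linear(d_hid, 1)
        self.out = nn.Linear(d_hid, d_out)

    def forward(self, vis, traj, lang):
        # Stack projected modalities: (batch, 3, d_hid).
        h = torch.stack([self.proj_vis(vis),
                         self.proj_traj(traj),
                         self.proj_lang(lang)], dim=1)
        # Softmax attention over the three modalities.
        w = torch.softmax(self.attn(torch.tanh(h)), dim=1)  # (batch, 3, 1)
        h = h * w
        # Low-rank trilinear interaction: elementwise product of the three.
        z = h[:, 0] * h[:, 1] * h[:, 2]
        return self.out(z)

# Usage with dummy features for a batch of subject-object pairs:
fusion = TrilinearFusion(d_vis=512, d_traj=128, d_lang=300, d_hid=256, d_out=256)
vis, traj, lang = torch.randn(4, 512), torch.randn(4, 128), torch.randn(4, 300)
print(fusion(vis, traj, lang).shape)  # torch.Size([4, 256])
```

The elementwise product of projected features is a standard low-rank approximation of a full trilinear tensor product, keeping the parameter count linear in the hidden dimension.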

Original language: English
Journal: Multimedia Tools and Applications
Publication status: Accepted/In press - 2024

Keywords

  • Graph network
  • Multi-modal features
  • Visual relation detection
