Online video visual relation detection with hierarchical multi-modal fusion

Yuxuan He, Ming Gang Gan*, Qianzhao Ma

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

With the development of artificial intelligence technology, visual scene understanding has become a hot research topic. Online visual relation detection plays an important role in dynamic visual scene understanding. However, completely modeling dynamic relations and exploiting large amounts of video content to infer visual relations are two difficult problems that remain to be solved. Therefore, we propose the Hierarchical Multi-Modal Fusion network for online video visual relation detection. We propose ASE-GCN to model dynamic scenes from different perspectives in order to fully capture visual relations in dynamic scenes. Meanwhile, we use trajectory features and natural language features as additional auxiliary features that describe the visual scene together with the high-level visual features constructed by ASE-GCN. To make full use of this information when inferring visual relations, we design a Hierarchical Fusion module before the relation predictor, which fuses the multi-role and multi-modal features using methods based on attention and trilinear pooling. Comparative experiments on the ImageNet-VidVRD dataset demonstrate that our network outperforms other methods, while ablation studies verify that the proposed modules are effective.
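The abstract does not give the exact formulation of the Hierarchical Fusion module, but the two ingredients it names — attention-based fusion and trilinear pooling — can be illustrated generically. The sketch below, a rough assumption rather than the paper's actual method, fuses three hypothetical modality vectors (visual, trajectory, language) via low-rank trilinear pooling, i.e. the element-wise product of per-modality linear projections; all names and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def trilinear_pool(v, t, l, W_v, W_t, W_l):
    """Low-rank trilinear pooling: project each modality into a shared
    space and take the element-wise product, a common approximation of
    the full trilinear tensor product (not the paper's exact module)."""
    return (W_v @ v) * (W_t @ t) * (W_l @ l)

# Hypothetical feature dimensions.
d_in, d_out = 16, 8
visual = rng.normal(size=d_in)  # stands in for a high-level visual feature
traj   = rng.normal(size=d_in)  # stands in for a trajectory feature
lang   = rng.normal(size=d_in)  # stands in for a language feature

W_v, W_t, W_l = (rng.normal(size=(d_out, d_in)) for _ in range(3))
fused = trilinear_pool(visual, traj, lang, W_v, W_t, W_l)
print(fused.shape)  # (8,)
```

In practice such a fused vector would be fed to the relation predictor; attention-based fusion would additionally learn weights over the per-role features before pooling.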

Original language: English
Journal: Multimedia Tools and Applications
DOI
Publication status: Accepted/In press - 2024
