Video Visual Relation Detection With Contextual Knowledge Embedding

Qianwen Cao, Heyan Huang*

*Corresponding author of this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Video visual relation detection (VidVRD) aims at abstracting structured relations in the form of <subject, predicate, object> from videos. This triplet form makes the search space extremely large and the data distribution unbalanced. Existing works usually predict relationships from visual, spatial, and semantic cues. Among them, semantic cues are responsible for exploring the semantic connections between objects, which is crucial for transferring knowledge across relations. However, most of these works extract semantic cues by simply mapping object labels to classification features, ignoring the contextual surroundings and thus performing poorly on low-frequency relations. To alleviate these issues, we propose a novel network, termed Contextual Knowledge Embedded Relation Network (CKERN), which facilitates VidVRD by establishing contextual knowledge embeddings for detected object pairs in relations from two aspects: commonsense attributes and prior linguistic dependencies. Specifically, we take the pair as a query to extract relational facts from a commonsense knowledge base, then encode them to explicitly construct the semantic surroundings of relations. In addition, the statistics of object pairs with different predicates, distilled from large-scale visual relations, are taken into account to represent the linguistic regularities of relations. Extensive experimental results on benchmark datasets demonstrate the effectiveness and robustness of our proposed model.
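The commonsense-query step described in the abstract — using a detected object pair to retrieve relational facts from a knowledge base and encoding them as semantic context — can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual CKERN implementation: the toy knowledge base, its facts, and the bag-of-words encoding are all assumptions made for demonstration.

```python
# Illustrative sketch of querying a commonsense KB with an object pair
# and encoding the retrieved facts. The KB contents and the encoding
# scheme are hypothetical stand-ins for the paper's method.

# A toy commonsense KB: facts stored as (head, relation, tail) triples.
KB = [
    ("dog", "IsA", "animal"),
    ("dog", "CapableOf", "chase"),
    ("frisbee", "IsA", "toy"),
    ("person", "CapableOf", "throw"),
]

def query_facts(subject, obj):
    """Return all KB facts whose head matches the subject or the object."""
    return [fact for fact in KB if fact[0] in (subject, obj)]

def encode_facts(facts, vocab):
    """Bag-of-words encoding of the retrieved facts over a fixed vocabulary,
    standing in for the learned fact encoder."""
    vec = [0] * len(vocab)
    for head, rel, tail in facts:
        for token in (head, rel, tail):
            if token in vocab:
                vec[vocab[token]] += 1
    return vec

vocab = {w: i for i, w in enumerate(
    ["dog", "frisbee", "person", "animal", "toy",
     "IsA", "CapableOf", "chase", "throw"])}

# Query with a detected <subject, object> pair, then encode the facts.
facts = query_facts("dog", "frisbee")
embedding = encode_facts(facts, vocab)
```

In the actual model this contextual embedding would be combined with visual and spatial cues before predicate classification; here it only demonstrates the retrieval-then-encode pattern.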

Original language: English
Pages (from-to): 13083-13095
Number of pages: 13
Journal: IEEE Transactions on Knowledge and Data Engineering
Volume: 35
Issue: 12
DOI
Publication status: Published - 1 Dec 2023
