IIU: Independent Inference Units for Knowledge-Based Visual Question Answering

  • Yili Li
  • Jing Yu*
  • Keke Gai
  • Gang Xiong

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

2 Citations (Scopus)

Abstract

Knowledge-based visual question answering requires external knowledge beyond the visible content to answer a question correctly. One limitation of existing methods is that they focus on modeling inter-modal and intra-modal correlations, which entangles complex multimodal clues in implicit embeddings and lacks interpretability and generalization ability. The key to solving this problem is to separate the information and process it independently at the functional level; by reusing each processing unit, the model's ability to generalize across different data can be increased. In this paper, we propose Independent Inference Units (IIU) for fine-grained multi-modal reasoning, which decompose intra-modal information into functionally independent units. Specifically, IIU processes each semantic-specific intra-modal clue with an independent inference unit, which also collects complementary information through communication with other units. To further reduce the impact of redundant information, we propose a memory update module that gradually maintains semantic-relevant memory along the reasoning process. Compared with existing non-pretrained multi-modal reasoning models on standard datasets, our model achieves a new state of the art, improving performance by 3% and surpassing basic pretrained multi-modal models. The experimental results show that our IIU model is effective in disentangling intra-modal clues, and its reasoning units provide explainable reasoning evidence. Our code is available at https://github.com/Lilidamowang/IIU.
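The abstract's core mechanism — independent units each processing one semantic-specific clue, exchanging messages, and feeding a gated memory that retains only semantic-relevant information — can be sketched as follows. This is a minimal illustration of the general idea only, not the paper's implementation: all names, dimensions, and the specific gating formula are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # hypothetical feature dimension

def unit_step(clue, peer_states, W):
    """One independent inference unit: combine its own clue with the
    mean message from the other units (illustrative formulation)."""
    return np.tanh(W @ (clue + peer_states.mean(axis=0)))

def memory_update(memory, candidate, relevance):
    """Gated memory update: a sigmoid gate keeps the old memory where
    the candidate's relevance score is low (assumed gating scheme)."""
    gate = 1.0 / (1.0 + np.exp(-relevance))
    return gate * candidate + (1.0 - gate) * memory

# Three units, one per semantic-specific clue, each with its own
# (random, untrained) weights -- purely for shape/flow illustration.
clues = [rng.normal(size=DIM) for _ in range(3)]
weights = [rng.normal(size=(DIM, DIM)) * 0.1 for _ in range(3)]
states = [np.zeros(DIM) for _ in range(3)]
memory = np.zeros(DIM)

for _ in range(2):  # two reasoning steps
    peer_states = np.stack(states)
    states = [unit_step(c, peer_states, W) for c, W in zip(clues, weights)]
    pooled = np.mean(states, axis=0)
    memory = memory_update(memory, pooled, relevance=pooled @ memory + 1.0)

print(memory.shape)
```

The design point this sketch tries to convey is modularity: because each unit only sees its own clue plus pooled messages, units can in principle be reused across inputs, which is the generalization argument the abstract makes.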

Original language: English
Title of host publication: Knowledge Science, Engineering and Management - 17th International Conference, KSEM 2024, Proceedings
Editors: Cungeng Cao, Huajun Chen, Liang Zhao, Junaid Arshad, Yonghao Wang, Taufiq Asyhari
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 109-120
Number of pages: 12
ISBN (Print): 9789819755004
DOIs
Publication status: Published - 2024
Event: 17th International Conference on Knowledge Science, Engineering and Management, KSEM 2024 - Birmingham, United Kingdom
Duration: 16 Aug 2024 - 18 Aug 2024

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 14887 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 17th International Conference on Knowledge Science, Engineering and Management, KSEM 2024
Country/Territory: United Kingdom
City: Birmingham
Period: 16/08/24 - 18/08/24

Keywords

  • Cross-Modal Learning
  • Knowledge Reasoning
  • Visual Question Answering
