TY - JOUR
T1 - Bilateral Cross-Modality Graph Matching Attention for Feature Fusion in Visual Question Answering
AU - Cao, Jianjian
AU - Qin, Xiameng
AU - Zhao, Sanyuan
AU - Shen, Jianbing
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2025
Y1 - 2025
N2 - Answering semantically complicated questions about an image is challenging in the visual question answering (VQA) task. Although the image can be well represented by deep learning, the question is often embedded in a simple manner and cannot fully convey its meaning. Moreover, the visual and textual features of the two modalities exhibit a gap, making it difficult to align and utilize cross-modality information. In this article, we focus on these two problems and propose a graph matching attention (GMA) network. First, it not only builds a graph for the image but also constructs a graph for the question in terms of both syntactic and embedding information. Next, we explore the intramodality relationships with a dual-stage graph encoder and then present a bilateral cross-modality GMA to infer the relationships between the image and the question. The updated cross-modality features are then sent into the answer prediction module for final answer prediction. Experiments demonstrate that our network achieves state-of-the-art performance on the GQA dataset and the VQA 2.0 dataset. The ablation studies verify the effectiveness of each module in our GMA network.
AB - Answering semantically complicated questions about an image is challenging in the visual question answering (VQA) task. Although the image can be well represented by deep learning, the question is often embedded in a simple manner and cannot fully convey its meaning. Moreover, the visual and textual features of the two modalities exhibit a gap, making it difficult to align and utilize cross-modality information. In this article, we focus on these two problems and propose a graph matching attention (GMA) network. First, it not only builds a graph for the image but also constructs a graph for the question in terms of both syntactic and embedding information. Next, we explore the intramodality relationships with a dual-stage graph encoder and then present a bilateral cross-modality GMA to infer the relationships between the image and the question. The updated cross-modality features are then sent into the answer prediction module for final answer prediction. Experiments demonstrate that our network achieves state-of-the-art performance on the GQA dataset and the VQA 2.0 dataset. The ablation studies verify the effectiveness of each module in our GMA network.
KW - Graph matching attention (GMA)
KW - relational reasoning
KW - visual question answering (VQA)
UR - http://www.scopus.com/inward/record.url?scp=86000426641&partnerID=8YFLogxK
U2 - 10.1109/TNNLS.2021.3135655
DO - 10.1109/TNNLS.2021.3135655
M3 - Article
AN - SCOPUS:86000426641
SN - 2162-237X
VL - 36
SP - 4160
EP - 4171
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 3
ER -