Multimodal matching-aware co-attention networks with mutual knowledge distillation for fake news detection

Linmei Hu*, Ziwang Zhao, Weijian Qi, Xuemeng Song, Liqiang Nie

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

5 Citations (Scopus)

Abstract

Fake news often exploits multimedia information such as text and images to mislead readers, which helps it proliferate and expand its influence. Most existing fake news detection methods apply the co-attention mechanism to fuse multimodal features while ignoring the consistency between image and text in co-attention. In this paper, we propose multimodal matching-aware co-attention networks with mutual knowledge distillation for improving fake news detection. Specifically, we design an image-text matching-aware co-attention mechanism that captures the alignment between image and text for better multimodal fusion. The image-text matching representation can be obtained via a vision-language pre-trained model. Additionally, based on the designed image-text matching-aware co-attention mechanism, we propose to build two co-attention networks, centered on text and image respectively, for mutual knowledge distillation to improve fake news detection. Extensive experiments on three benchmark datasets demonstrate that our proposed model outperforms existing methods on multimodal fake news detection.
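The abstract describes two key ideas: cross-modal co-attention modulated by an image-text matching signal, and mutual knowledge distillation between a text-centered and an image-centered network. The following is a minimal sketch of these ideas, not the authors' implementation; the module names, feature dimensions, the way the matching score enters the attention, and the distillation loss form are illustrative assumptions.

```python
# Hypothetical sketch of matching-aware co-attention with mutual knowledge
# distillation. All design choices here are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MatchingAwareCoAttention(nn.Module):
    """Cross-attention from one modality (query) to the other (context),
    scaled by an image-text matching score in [0, 1]."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, query, context, match_score):
        # match_score: (batch, 1), e.g. taken from a vision-language
        # pre-trained model's image-text matching head (assumed input).
        attended, _ = self.attn(query, context, context)
        # Down-weight cross-modal information when image and text mismatch.
        return query + match_score.unsqueeze(-1) * attended


class ModalityCenteredNet(nn.Module):
    """One branch: its own modality as query, the other modality as context."""

    def __init__(self, dim: int, num_classes: int = 2):
        super().__init__()
        self.co_attn = MatchingAwareCoAttention(dim)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, own_feats, other_feats, match_score):
        fused = self.co_attn(own_feats, other_feats, match_score)
        return self.classifier(fused.mean(dim=1))  # pooled logits


def mutual_kd_loss(logits_a, logits_b, labels, temperature: float = 2.0):
    """Cross-entropy for each branch plus symmetric KL divergence between
    their softened predictions (mutual knowledge distillation)."""
    ce = F.cross_entropy(logits_a, labels) + F.cross_entropy(logits_b, labels)
    log_p_a = F.log_softmax(logits_a / temperature, dim=-1)
    log_p_b = F.log_softmax(logits_b / temperature, dim=-1)
    kd = F.kl_div(log_p_a, log_p_b.exp(), reduction="batchmean") + \
         F.kl_div(log_p_b, log_p_a.exp(), reduction="batchmean")
    return ce + (temperature ** 2) * kd


if __name__ == "__main__":
    batch, text_len, img_regions, dim = 8, 32, 49, 256
    text = torch.randn(batch, text_len, dim)      # e.g. token features
    image = torch.randn(batch, img_regions, dim)  # e.g. image patch features
    match = torch.rand(batch, 1)                  # image-text matching score
    labels = torch.randint(0, 2, (batch,))

    text_net = ModalityCenteredNet(dim)   # text-centered branch
    image_net = ModalityCenteredNet(dim)  # image-centered branch

    loss = mutual_kd_loss(text_net(text, image, match),
                          image_net(image, text, match), labels)
    loss.backward()
```

In this sketch the matching score acts as a gate on the cross-attended features, and the two branches teach each other through the symmetric KL term; the paper's actual fusion and distillation objectives may differ.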

Original language: English
Article number: 120310
Journal: Information Sciences
Volume: 664
DOI
Publication status: Published - April 2024
