Cross-modal complementary network with hierarchical fusion for multimodal sentiment classification

Cheng Peng, Chunxia Zhang*, Xiaojun Xue, Jiameng Gao, Hongjian Liang, Zhengdong Niu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

23 Citations (Scopus)

Abstract

Multimodal Sentiment Classification (MSC) uses multimodal data, such as images and texts, to identify users' sentiment polarities from the information they post on the Internet. MSC has attracted considerable attention because of its wide applications in social computing and opinion mining. However, improper correlation strategies can cause erroneous fusion, as texts and images that are unrelated to each other may be integrated. Moreover, simply concatenating features modality by modality, even when the modalities are truly correlated, cannot fully capture the features within and between modalities. To solve these problems, this paper proposes a Cross-Modal Complementary Network (CMCN) with hierarchical fusion for MSC. The CMCN is designed as a hierarchical structure with three key modules: a feature extraction module that extracts features from texts and images, a feature attention module that learns both text and image attention features generated by an image-text correlation generator, and a cross-modal hierarchical fusion module that fuses features within and between modalities. This hierarchical fusion framework can fully integrate different modal features and helps reduce the risk of integrating unrelated ones. Extensive experimental results on three public datasets show that the proposed approach significantly outperforms state-of-the-art methods.
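To make the three-stage structure described above concrete, the following is a minimal PyTorch sketch of such a pipeline: feature extraction (projection of per-modality encoder outputs), a correlation-gated cross-modal attention step standing in for the image-text correlation generator, and a fusion of intra-modal and cross-modal features. All module names, dimensions, and the specific gating mechanism are illustrative assumptions, not the authors' implementation; the paper's actual CMCN design is defined in the full text.

```python
# Hypothetical sketch of a CMCN-style pipeline; dimensions and gating are assumptions.
import torch
import torch.nn as nn

class CMCNSketch(nn.Module):
    def __init__(self, text_dim=768, image_dim=2048, hidden_dim=256, num_classes=3):
        super().__init__()
        # 1) Feature extraction: project pre-extracted text/image features
        #    (e.g., from BERT / ResNet encoders) into a shared space.
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.image_proj = nn.Linear(image_dim, hidden_dim)
        # 2) Feature attention: a scalar image-text correlation score gates the
        #    cross-modal attention (a stand-in for the paper's image-text
        #    correlation generator), down-weighting unrelated pairs.
        self.correlation = nn.Sequential(nn.Linear(2 * hidden_dim, 1), nn.Sigmoid())
        self.cross_attn = nn.MultiheadAttention(hidden_dim, num_heads=4, batch_first=True)
        # 3) Hierarchical fusion: combine intra-modal and cross-modal features
        #    before sentiment classification.
        self.classifier = nn.Sequential(
            nn.Linear(4 * hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, text_feats, image_feats):
        # text_feats: (batch, seq_len, text_dim); image_feats: (batch, regions, image_dim)
        t = self.text_proj(text_feats)
        v = self.image_proj(image_feats)
        # Pooled intra-modal representations.
        t_pool, v_pool = t.mean(dim=1), v.mean(dim=1)
        # Correlation gate in [0, 1]: low when the image-text pair looks unrelated.
        gate = self.correlation(torch.cat([t_pool, v_pool], dim=-1))  # (batch, 1)
        # Cross-modal attention in both directions, scaled by the gate.
        t2v, _ = self.cross_attn(t, v, v)  # text attends to image regions
        v2t, _ = self.cross_attn(v, t, t)  # image attends to text tokens
        t2v = gate.unsqueeze(1) * t2v
        v2t = gate.unsqueeze(1) * v2t
        # Fuse intra-modal and gated cross-modal features, then classify.
        fused = torch.cat([t_pool, v_pool, t2v.mean(dim=1), v2t.mean(dim=1)], dim=-1)
        return self.classifier(fused)

# Usage: logits of shape (2, 3) for a batch of 2 image-text pairs.
logits = CMCNSketch()(torch.randn(2, 16, 768), torch.randn(2, 49, 2048))
```

The gate illustrates the abstract's point about erroneous fusion: when the correlation score is near zero, the cross-modal terms vanish and the model falls back on intra-modal features alone.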

Original language: English
Pages (from-to): 664-679
Number of pages: 16
Journal: Tsinghua Science and Technology
Volume: 27
Issue number: 4
DOI
Publication status: Published - 1 Aug 2022
