Cross-modal complementary network with hierarchical fusion for multimodal sentiment classification

Cheng Peng, Chunxia Zhang*, Xiaojun Xue, Jiameng Gao, Hongjian Liang, Zhengdong Niu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

23 Citations (Scopus)

Abstract

Multimodal Sentiment Classification (MSC) uses multimodal data, such as images and texts, to identify users' sentiment polarities from the information they post on the Internet. MSC has attracted considerable attention because of its wide applications in social computing and opinion mining. However, improper correlation strategies can cause erroneous fusion, because texts and images that are unrelated to each other may be integrated. Moreover, simply concatenating features modality by modality, even when the modalities are truly correlated, cannot fully capture the features within and between modalities. To solve these problems, this paper proposes a Cross-Modal Complementary Network (CMCN) with hierarchical fusion for MSC. The CMCN is designed as a hierarchical structure with three key modules: a feature extraction module that extracts features from texts and images, a feature attention module that learns both text and image attention features guided by an image-text correlation generator, and a cross-modal hierarchical fusion module that fuses features within and between modalities. This design provides a hierarchical fusion framework that can fully integrate different modal features while reducing the risk of integrating unrelated ones. Extensive experimental results on three public datasets show that the proposed approach significantly outperforms state-of-the-art methods.
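The abstract's core idea, gating cross-modal fusion by an estimated image-text correlation so that unrelated modalities are not blindly concatenated, can be illustrated with a minimal sketch. This is not the paper's actual CMCN: the function names are hypothetical, cosine similarity stands in for the learned image-text correlation generator, and a simple gate plus elementwise interaction stands in for the attention and hierarchical fusion modules.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two 1-D feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def hierarchical_fuse(text_feat, image_feat):
    """Correlation-gated fusion sketch (illustrative only).

    Step 1: estimate image-text relatedness; cosine similarity here
            stands in for the paper's image-text correlation generator.
    Step 2: gate the image features so an unrelated image contributes less.
    Step 3: fuse within- and between-modality information: each modality's
            own features plus an elementwise cross-modal interaction term.
    Returns the fused vector and the raw correlation score.
    """
    corr = cosine(text_feat, image_feat)
    gate = max(corr, 0.0)               # suppress negatively correlated images
    gated_image = gate * image_feat
    cross = text_feat * gated_image     # between-modality interaction
    fused = np.concatenate([text_feat, gated_image, cross])
    return fused, corr

# Example: two random 8-dimensional feature vectors.
rng = np.random.default_rng(0)
t = rng.standard_normal(8)
v = rng.standard_normal(8)
fused, corr = hierarchical_fuse(t, v)   # fused has dimension 3 * 8 = 24
```

In the real CMCN the gate, attention weights, and fusion layers are learned end-to-end; the fixed cosine gate above only conveys why an explicit correlation estimate helps avoid erroneous fusion of unrelated pairs.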

Original language: English
Pages (from-to): 664-679
Number of pages: 16
Journal: Tsinghua Science and Technology
Volume: 27
Issue number: 4
Publication status: Published - 1 Aug 2022

Keywords

  • Cross-Modal Complementary Network (CMCN)
  • hierarchical fusion
  • joint optimization
  • multimodal fusion
  • multimodal sentiment analysis
