A Multitask learning model for multimodal sarcasm, sentiment and emotion recognition in conversations

Yazhou Zhang, Jinglin Wang, Yaochen Liu, Lu Rong, Qian Zheng*, Dawei Song, Prayag Tiwari, Jing Qin

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

44 Citations (Scopus)

Abstract

Sarcasm, sentiment and emotion are tightly coupled with each other in that one helps the understanding of the others, which makes their joint recognition in conversation a research focus in artificial intelligence (AI) and affective computing. Three main challenges exist: context dependency, multimodal fusion and multitask interaction. However, most existing works fail to explicitly leverage and model the relationships among the related tasks. In this paper, we aim to generically address the three problems with a multimodal joint framework. We thus propose a multimodal multitask learning model based on the encoder–decoder architecture, termed M2Seq2Seq. At the heart of the encoder module are two attention mechanisms, i.e., intramodal (Ia) attention and intermodal (Ie) attention. Ia attention is designed to capture the contextual dependency between adjacent utterances, while Ie attention is designed to model multimodal interactions. On the decoder side, we design two kinds of multitask learning (MTL) decoders, i.e., single-level and multilevel decoders, to explore their potential. More specifically, the core of the single-level decoder is a masked outer-modal (Or) self-attention mechanism, whose main motivation is to explicitly model the interdependence among the tasks of sarcasm, sentiment and emotion recognition. The core of the multilevel decoder contains shared gating and task-specific gating networks. Comprehensive experiments on four benchmark datasets, MUStARD, Memotion, CMU-MOSEI and MELD, demonstrate the effectiveness of M2Seq2Seq over state-of-the-art baselines (e.g., CM-GCN, A-MTL), with significant improvements of 1.9%, 2.0%, 5.0%, 0.8%, 4.3%, 3.1%, 2.8%, 1.0%, 1.7% and 2.8% in terms of Micro F1.
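To make the architecture described in the abstract more concrete, the following is a minimal sketch, not the authors' released implementation. It assumes PyTorch, realizes the Ia, Ie and Or mechanisms with standard multi-head attention, and all module names, dimensions and class counts are illustrative assumptions; the task-level attention here is left unmasked for brevity, whereas the paper's Or attention is masked.

```python
# Minimal sketch of an M2Seq2Seq-style encoder/decoder (illustrative assumptions only).
import torch
import torch.nn as nn


class M2Seq2SeqSketch(nn.Module):
    def __init__(self, d_model=128, n_heads=4, n_tasks=3, n_classes=(2, 3, 7)):
        super().__init__()
        # Intramodal (Ia) attention: contextual dependency between adjacent
        # utterances, applied within each modality stream (text / audio / vision).
        self.ia_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Intermodal (Ie) attention: interactions across the modality streams.
        self.ie_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Single-level decoder: outer (Or) self-attention over learned task queries,
        # intended to model interdependence among sarcasm/sentiment/emotion tasks.
        self.task_queries = nn.Parameter(torch.randn(n_tasks, d_model))
        self.or_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.heads = nn.ModuleList(nn.Linear(d_model, c) for c in n_classes)

    def forward(self, text, audio, vision):
        # text/audio/vision: (batch, seq_len, d_model) utterance sequences per modality.
        streams = []
        for m in (text, audio, vision):
            ctx, _ = self.ia_attn(m, m, m)            # Ia: within-modality context
            streams.append(ctx)
        fused = torch.cat(streams, dim=1)             # stack modalities along sequence
        fused, _ = self.ie_attn(fused, fused, fused)  # Ie: cross-modal interaction
        summary = fused.mean(dim=1, keepdim=True)     # one utterance-level summary

        # Condition the task queries on the fused summary, then let the tasks
        # attend to one another (unmasked here; the paper uses a masked variant).
        q = self.task_queries.unsqueeze(0).expand(summary.size(0), -1, -1) + summary
        q, _ = self.or_attn(q, q, q)
        return [head(q[:, i]) for i, head in enumerate(self.heads)]


if __name__ == "__main__":
    model = M2Seq2SeqSketch()
    logits = model(torch.randn(2, 10, 128), torch.randn(2, 10, 128), torch.randn(2, 10, 128))
    print([tuple(l.shape) for l in logits])  # [(2, 2), (2, 3), (2, 7)]
```

In this sketch the three classification heads stand in for the sarcasm, sentiment and emotion outputs; a multilevel-decoder variant would replace the shared task-query stage with shared and task-specific gating networks as described in the abstract.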

Original language: English
Pages (from-to): 282-301
Number of pages: 20
Journal: Information Fusion
Volume: 93
DOIs
Publication status: Published - May 2023

Keywords

  • Affective computing
  • Emotion recognition
  • Multimodal sarcasm recognition
  • Multitask learning
  • Sentiment analysis

