A Multimodal Emotion Perception Model based on Context-Aware Decision-Level Fusion

Yishan Chen, Zhiyang Jia, Kaoru Hirota, Yaping Dai*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

2 Citations (Scopus)

Abstract

A Multimodal Emotion Perception model with Audio and Visual modalities (MEP-AV) is proposed to detect individual emotions in public areas. The framework of the MEP-AV model consists of four parts: a data collection module, an audio expression analysis module, a visual expression analysis module, and a multimodal fusion module. To ensure that the emotion perception results meet the requirement of short-term continuity, a Context-Aware Decision-Level Fusion (CADLF) model is proposed and applied in the multimodal fusion module. The CADLF model estimates affective status by using the context information of multimodal emotion, and short-term continuity is taken into account to improve the accuracy of the emotion perception results. Experimental results evaluated by various metrics demonstrate that the multimodal structure outperforms the unimodal emotion classifiers. The MEP-AV model with the multimodal fusion algorithm achieves accuracies of 70.89% and 77.07% on valence and arousal respectively, and the F1-scores reach 70.2% and 75.6% respectively, indicating improved performance on emotion perception.
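The paper itself does not publish its fusion equations here, but the idea described in the abstract can be illustrated with a minimal sketch: decision-level fusion combines the per-class probability outputs of the audio and visual classifiers (here via an illustrative weighted average), and the "short-term continuity" of context is approximated by exponential smoothing over successive fused estimates. The function names, weights, and smoothing rule below are assumptions for illustration, not the authors' actual CADLF formulation.

```python
def fuse_decisions(p_audio, p_visual, w_audio=0.5):
    """Weighted decision-level fusion of per-class probabilities from
    the audio and visual classifiers (weight is illustrative)."""
    fused = [w_audio * a + (1.0 - w_audio) * v
             for a, v in zip(p_audio, p_visual)]
    total = sum(fused)
    return [p / total for p in fused]  # renormalise to a distribution


def smooth_with_context(fused_seq, alpha=0.7):
    """Exponential smoothing over a sequence of fused estimates --
    a simple stand-in for short-term-continuity context modelling."""
    state = list(fused_seq[0])
    out = [state[:]]
    for p in fused_seq[1:]:
        state = [alpha * s + (1.0 - alpha) * q for s, q in zip(state, p)]
        out.append(state[:])
    return out
```

For example, if the audio classifier outputs `[0.8, 0.2]` and the visual classifier `[0.4, 0.6]` over two emotion classes, equal-weight fusion yields `[0.6, 0.4]`; smoothing then damps abrupt swings between consecutive frames, which is the role the abstract attributes to context awareness.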

Original language: English
Title of host publication: Proceedings of the 41st Chinese Control Conference, CCC 2022
Editors: Zhijun Li, Jian Sun
Publisher: IEEE Computer Society
Pages: 7332-7337
Number of pages: 6
ISBN (Electronic): 9789887581536
DOI: https://doi.org/10.23919/CCC55666.2022.9902799
Publication status: Published - 2022
Event: 41st Chinese Control Conference, CCC 2022 - Hefei, China
Duration: 25 Jul 2022 - 27 Jul 2022

Publication series

Name: Chinese Control Conference, CCC
Volume: 2022-July
ISSN (Print): 1934-1768
ISSN (Electronic): 2161-2927

Conference

Conference: 41st Chinese Control Conference, CCC 2022
Country/Territory: China
City: Hefei
Period: 25/07/22 - 27/07/22

Keywords

  • Context-aware
  • Decision-level Fusion
  • Emotion Perception


Cite this

Chen, Y., Jia, Z., Hirota, K., & Dai, Y. (2022). A Multimodal Emotion Perception Model based on Context-Aware Decision-Level Fusion. In Z. Li, & J. Sun (Eds.), Proceedings of the 41st Chinese Control Conference, CCC 2022 (pp. 7332-7337). (Chinese Control Conference, CCC; Vol. 2022-July). IEEE Computer Society. https://doi.org/10.23919/CCC55666.2022.9902799