
Deep Residual Network with D-S Evidence Theory for Bimodal Emotion Recognition

  • Yulong Liu
  • Luefeng Chen*
  • Min Li
  • Min Wu
  • Witold Pedrycz
  • Kaoru Hirota
  • *Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer review

Abstract

In this paper, a Deep Residual Network (ResNet) with Dempster-Shafer (D-S) evidence theory is presented for bimodal emotion recognition, applying both facial expression and speech emotion information. By acquiring discriminative emotion features and performing bimodal fusion, this method overcomes the limitations of single-modal emotion recognition and obtains higher recognition accuracy. Key regions of facial expressions and spectrograms are first used to acquire low-level emotion characteristics. Two ResNets are then designed to extract high-level emotion semantic features. Finally, within the framework of D-S evidence theory, the output probability values of the two networks are fused to improve the effectiveness of bimodal emotion recognition. Experimental studies on the eNTERFACE'05 database demonstrate a recognition accuracy of 88.67%, an improvement of 23.11% and 9.32% over the individual facial-expression and speech modalities, respectively.
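The fusion step can be illustrated with Dempster's rule of combination. This is a minimal sketch, assuming each modality's ResNet outputs a softmax probability distribution over the same emotion classes, treated as basic probability assignments on singleton hypotheses; the class names and probability values below are illustrative, not taken from the paper.

```python
def dempster_fuse(m_face, m_speech):
    """Fuse two basic probability assignments over singleton classes
    using Dempster's rule of combination."""
    classes = m_face.keys()
    # Conflict mass K: total mass assigned to incompatible class pairs.
    k = sum(m_face[a] * m_speech[b]
            for a in classes for b in classes if a != b)
    if k >= 1.0:
        raise ValueError("Total conflict: sources cannot be combined")
    # Combined mass per class, renormalised by (1 - K).
    return {c: (m_face[c] * m_speech[c]) / (1.0 - k) for c in classes}

# Hypothetical softmax outputs from the two modality networks.
face = {"anger": 0.6, "joy": 0.3, "sadness": 0.1}
speech = {"anger": 0.5, "joy": 0.2, "sadness": 0.3}
fused = dempster_fuse(face, speech)
print(max(fused, key=fused.get))  # → anger
```

Because the product of masses rewards classes on which both modalities agree, the fused distribution is sharper than either input: here "anger" rises from 0.6 and 0.5 to roughly 0.77 after renormalisation.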

Original language: English
Title of host publication: Proceeding - 2021 China Automation Congress, CAC 2021
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 4674-4679
Number of pages: 6
ISBN (electronic): 9781665426473
DOI
Publication status: Published - 2021
Externally published: Yes
Event: 2021 China Automation Congress, CAC 2021 - Beijing, China
Duration: 22 Oct 2021 → 24 Oct 2021

Publication series

Name: Proceeding - 2021 China Automation Congress, CAC 2021

Conference

Conference: 2021 China Automation Congress, CAC 2021
Country/Territory: China
City: Beijing
Period: 22/10/21 → 24/10/21
