Coupled Multimodal Emotional Feature Analysis Based on Broad-Deep Fusion Networks in Human–Robot Interaction

Luefeng Chen, Min Li, Min Wu, Witold Pedrycz, Kaoru Hirota

Research output: Contribution to journal › Article › peer-review

11 Citations (Scopus)

Abstract

A coupled multimodal emotional feature analysis (CMEFA) method based on broad–deep fusion networks, which divides multimodal emotion recognition into two layers, is proposed. First, facial emotional features and gesture emotional features are extracted using the broad and deep learning fusion network (BDFN). Considering that the two emotional modalities are not completely independent of each other, canonical correlation analysis (CCA) is used to analyze and extract the correlation between the emotional features, and a coupling network is established for emotion recognition of the extracted bimodal features. Both simulation and application experiments are completed. In the simulation experiments on the bimodal face and body gesture database (FABO), the recognition rate of the proposed method is 1.15% higher than that of support vector machine recursive feature elimination (SVMRFE), which does not consider the unbalanced contribution of features. Moreover, the multimodal recognition rate of the proposed method is 21.22%, 2.65%, 1.61%, 1.54%, and 0.20% higher than those of the fuzzy deep neural network with sparse autoencoder (FDNNSA), ResNet-101 + GFK, C3D + MCB + DBN, the hierarchical classification fusion strategy (HCFS), and the cross-channel convolutional neural network (CCCNN), respectively. In addition, preliminary application experiments are carried out on our developed emotional social robot system, where the robot recognizes the emotions of eight volunteers from their facial expressions and body gestures.
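
The CCA-based fusion step described in the abstract can be illustrated with a minimal sketch. The snippet below uses scikit-learn's CCA to project two modality feature matrices onto maximally correlated canonical components and then fuses them for classification. The random placeholder features, the choice of 16 canonical components, and the logistic-regression classifier are illustrative assumptions standing in for the BDFN-extracted features and the paper's coupling network; this is not the authors' CMEFA implementation.

```python
# Minimal sketch of CCA-based bimodal feature fusion (assumptions noted above).
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_samples, d_face, d_gesture = 200, 64, 48
X_face = rng.normal(size=(n_samples, d_face))        # placeholder facial emotional features
X_gesture = rng.normal(size=(n_samples, d_gesture))  # placeholder gesture emotional features
y = rng.integers(0, 8, size=n_samples)               # e.g., 8 emotion classes

# Project both modalities onto maximally correlated canonical components,
# exploiting the fact that the two modalities are not independent.
cca = CCA(n_components=16)
Z_face, Z_gesture = cca.fit_transform(X_face, X_gesture)

# Fuse the correlated components and train a simple classifier in place of
# the paper's coupling network.
Z = np.concatenate([Z_face, Z_gesture], axis=1)
clf = LogisticRegression(max_iter=1000).fit(Z, y)
print("training accuracy:", clf.score(Z, y))
```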

Original language: English
Pages (from-to): 1-11
Number of pages: 11
Journal: IEEE Transactions on Neural Networks and Learning Systems
DOI: https://doi.org/10.1109/TNNLS.2023.3236320
Publication status: Accepted/In press - 2023
Externally published

Cite this

Chen, L., Li, M., Wu, M., Pedrycz, W., & Hirota, K. (Accepted/In press). Coupled Multimodal Emotional Feature Analysis Based on Broad-Deep Fusion Networks in Human–Robot Interaction. IEEE Transactions on Neural Networks and Learning Systems, 1-11. https://doi.org/10.1109/TNNLS.2023.3236320