Emotion recognition based on human gesture and speech information using RT middleware

H. A. Vu*, Y. Yamazaki, F. Dong, K. Hirota

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

23 Citations (Scopus)

Abstract

A bi-modal emotion recognition approach is proposed that integrates gesture and speech information to recognize four emotions. The outputs of two uni-modal emotion recognition systems, one based on affective speech and the other on expressive gesture, are combined at the decision level using weight criterion fusion and best probability plus majority vote fusion, yielding a classifier that performs better than either uni-modal system and helps identify emotions suitable for communication situations. To validate the proposal, fifty Japanese words (or phrases) and 8 types of gestures recorded from five participants are used, and the emotion recognition rate increases up to 85.39%. The approach can be extended to additional modalities and is useful for automatic emotion recognition in human-robot communication.
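
The page does not reproduce the paper's fusion formulas; the sketch below is only a rough illustration of decision-level fusion with modality weights, assuming each uni-modal classifier outputs a probability distribution over the four emotions. The emotion labels, weights, and function name are hypothetical and not taken from the paper, and the best probability plus majority vote variant mentioned in the abstract is not shown.

    import numpy as np

    # Hypothetical label set for illustration; the paper's actual classes
    # and fusion parameters are not reproduced here.
    EMOTIONS = ["joy", "anger", "sadness", "neutral"]

    def weighted_decision_fusion(p_speech, p_gesture, w_speech=0.5, w_gesture=0.5):
        # Each argument is a per-class probability vector from one uni-modal
        # classifier; the fused score is a weighted sum, and the predicted
        # emotion is the class with the highest fused score.
        fused = w_speech * np.asarray(p_speech) + w_gesture * np.asarray(p_gesture)
        return EMOTIONS[int(np.argmax(fused))], fused

    # Example: the speech classifier leans toward anger, the gesture
    # classifier toward joy; the weights decide which modality dominates.
    label, scores = weighted_decision_fusion(
        [0.10, 0.60, 0.20, 0.10],   # speech probabilities
        [0.55, 0.25, 0.10, 0.10],   # gesture probabilities
        w_speech=0.6, w_gesture=0.4,
    )
    print(label, scores)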

Original language: English
Title of host publication: FUZZ 2011 - 2011 IEEE International Conference on Fuzzy Systems - Proceedings
Pages: 787-791
Number of pages: 5
DOIs
Publication status: Published - 2011
Externally published: Yes
Event: 2011 IEEE International Conference on Fuzzy Systems, FUZZ 2011 - Taipei, Taiwan, Province of China
Duration: 27 Jun 2011 - 30 Jun 2011

Publication series

Name: IEEE International Conference on Fuzzy Systems
ISSN (Print): 1098-7584

Conference

Conference: 2011 IEEE International Conference on Fuzzy Systems, FUZZ 2011
Country/Territory: Taiwan, Province of China
City: Taipei
Period: 27/06/11 - 30/06/11

Keywords

  • Affective Speech
  • Decision-level Fusion
  • Emotion Recognition
  • Expressive Gesture
