Two-layer fuzzy multiple random forest for speech emotion recognition in human-robot interaction

Luefeng Chen, Wanjuan Su, Yu Feng, Min Wu*, Jinhua She, Kaoru Hirota

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

136 Citations (Scopus)

Abstract

A two-layer fuzzy multiple random forest (TLFMRF) is proposed for speech emotion recognition. Speech emotion recognition typically faces two problems: feature extraction relies on personalized features, and recognition does not account for differences among different categories of people. In the proposal, personalized and non-personalized features are fused for speech emotion recognition. High-dimensional emotional features are divided into subclasses by the fuzzy C-means clustering algorithm, and multiple random forests are used to recognize the different emotional states, yielding the TLFMRF. Moreover, emotions that are difficult to recognize are classified separately. The results show that the TLFMRF identifies emotions in a stable manner. To demonstrate the effectiveness of the proposal, experiments are conducted on the CASIA corpus and the Berlin EmoDB. Experimental results show that the recognition accuracies of the proposal are 1.39%–7.64% and 4.06%–4.30% higher than those of a back-propagation neural network and a random forest, respectively. In addition, preliminary application experiments on an emotional social robot system indicate that the mobile robot can track six basic emotions (anger, fear, happiness, neutral, sadness, and surprise) in real time.
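To illustrate the two-layer idea described in the abstract, the following is a minimal sketch, not the authors' implementation: a first layer partitions the high-dimensional emotion features into subclasses with fuzzy C-means, and a second layer trains one random forest per subclass. The cluster count, fuzzifier value, tree count, and class structure are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier


def fuzzy_c_means(X, n_clusters=3, m=2.0, n_iter=100, seed=0):
    """Plain NumPy fuzzy C-means: returns cluster centers and the membership matrix."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)                    # memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]   # membership-weighted cluster centers
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))                 # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U


class TwoLayerFuzzyMultipleRF:
    """First layer: FCM subclass assignment; second layer: one random forest per subclass."""

    def __init__(self, n_clusters=3, n_trees=100):
        self.n_clusters = n_clusters
        self.n_trees = n_trees

    def fit(self, X, y):
        self.centers_, U = fuzzy_c_means(X, self.n_clusters)
        subclass = U.argmax(axis=1)                      # hard assignment for training
        self.forests_ = {}
        for c in range(self.n_clusters):
            idx = subclass == c
            if idx.any():                                # skip empty subclasses
                rf = RandomForestClassifier(n_estimators=self.n_trees, random_state=0)
                self.forests_[c] = rf.fit(X[idx], y[idx])
        return self

    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centers_[None, :, :], axis=2)
        # Route each sample to the nearest subclass that has a trained forest.
        trained = np.array(sorted(self.forests_))
        subclass = trained[d[:, trained].argmin(axis=1)]
        y_pred = np.empty(len(X), dtype=object)
        for c, rf in self.forests_.items():
            idx = subclass == c
            if idx.any():
                y_pred[idx] = rf.predict(X[idx])
        return y_pred
```

Usage, assuming X holds extracted speech features (e.g., MFCC statistics) and y holds emotion labels: model = TwoLayerFuzzyMultipleRF().fit(X_train, y_train); y_hat = model.predict(X_test). The sketch only shows the clustering-plus-forests routing; the paper additionally fuses personalized and non-personalized features before the first layer.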

Original language: English
Pages (from-to): 150-163
Number of pages: 14
Journal: Information Sciences
Volume: 509
DOIs
Publication status: Published - Jan 2020

Keywords

  • Fuzzy C-means
  • Human-robot interaction
  • Multiple random forest
  • Speech emotion recognition
