Emotion Classification with EEG Responses Evoked by Emotional Prosody of Speech

Zechen Zhang, Xihong Wu, Jing Chen

Research output: Contribution to journal › Conference article › peer-review

Abstract

Emotion classification with EEG responses can be used in human-computer interaction, security, medical treatment, and other applications. Neural responses recorded via EEG reflect more direct and objective emotional information than behavioral signals such as facial expressions. Most previous studies used only EEG features as input to machine learning models. In this work, we assumed that when emotion is evoked by the emotional prosody of speech, the emotional features contained in the speech stimuli themselves could assist EEG-based emotion recognition. An EEG data corpus was collected with specific speech stimuli in which emotion was conveyed only through prosody, without semantic context. A novel EEG-Prosody CRNN model was proposed to classify four typical emotions. Classification accuracy reached 82.85% when the prosody features of the speech were integrated as input, outperforming most audio-evoked EEG-based emotion classification methods.
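
The abstract does not spell out the EEG-Prosody CRNN architecture, so the following is a minimal PyTorch sketch of the general idea it describes: a convolutional-recurrent encoder over multi-channel EEG whose summary vector is concatenated with a speech-prosody feature vector before classification into four emotions. All layer sizes and feature dimensions (32 EEG channels, a 64-dimensional prosody vector such as pitch/energy statistics) are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn

    class EEGProsodyCRNNSketch(nn.Module):
        """Illustrative CRNN fusing EEG with prosody features (hypothetical dims)."""

        def __init__(self, eeg_channels=32, prosody_dim=64, n_classes=4):
            super().__init__()
            # Convolutional front end over the EEG time series
            self.conv = nn.Sequential(
                nn.Conv1d(eeg_channels, 64, kernel_size=7, padding=3),
                nn.BatchNorm1d(64),
                nn.ReLU(),
                nn.MaxPool1d(4),
            )
            # Recurrent layer summarizes the convolutional feature sequence
            self.gru = nn.GRU(64, 128, batch_first=True)
            # Classifier over the concatenated EEG and prosody representations
            self.fc = nn.Linear(128 + prosody_dim, n_classes)

        def forward(self, eeg, prosody):
            # eeg: (batch, channels, time); prosody: (batch, prosody_dim)
            x = self.conv(eeg)            # (batch, 64, time // 4)
            x = x.transpose(1, 2)         # (batch, time // 4, 64)
            _, h = self.gru(x)            # h: (num_layers, batch, 128)
            fused = torch.cat([h[-1], prosody], dim=1)
            return self.fc(fused)         # logits over the 4 emotion classes

    # Example: a batch of 8 trials, 32 EEG channels, 2 s at 256 Hz
    model = EEGProsodyCRNNSketch()
    logits = model(torch.randn(8, 32, 512), torch.randn(8, 64))
    print(logits.shape)  # torch.Size([8, 4])

Late fusion by concatenation, as above, is one simple way to let stimulus-side prosody features complement the EEG representation; the paper's actual fusion scheme may differ.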

Original language: English
Pages (from-to): 4254-4258
Number of pages: 5
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Volume: 2023-August
DOIs
Publication status: Published - 2023
Externally published: Yes
Event: 24th Annual Conference of the International Speech Communication Association, Interspeech 2023 - Dublin, Ireland
Duration: 20 Aug 2023 - 24 Aug 2023

Keywords

  • EEG
  • emotion classification
  • emotional prosody
  • multi-modal learning
