Abstract
Emotion classification from EEG responses can be applied in human-computer interaction, security, medical treatment, and other domains. Neural responses recorded via EEG reflect more direct and objective emotional information than behavioral signals such as facial expressions. In most previous studies, only EEG features were used as input to machine learning models. In this work, we assumed that the emotional features contained in speech stimuli could assist EEG-based emotion recognition when the emotion is evoked by the emotional prosody of speech. An EEG data corpus was collected with specific speech stimuli in which emotion was conveyed only through speech prosody, without semantic context. A novel EEG-Prosody CRNN model was proposed to classify four typical emotion types. The classification accuracy reached 82.85% when the prosody features of speech were integrated as input, outperforming most audio-evoked EEG-based emotion classification methods.
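The abstract describes a multi-modal design: a convolutional-recurrent network over the EEG signal, fused with speech-prosody features before classification. The paper's actual architecture is not given here, so the sketch below is only a minimal illustration of one plausible late-fusion EEG-Prosody CRNN in PyTorch; all layer sizes, channel counts, the prosody feature dimension, and the fusion strategy are assumptions for illustration.

```python
# Hypothetical sketch of an EEG-Prosody CRNN with late fusion.
# All dimensions and layer choices are illustrative assumptions,
# not the paper's reported architecture.
import torch
import torch.nn as nn

class EEGProsodyCRNN(nn.Module):
    def __init__(self, n_eeg_channels=32, n_prosody_feats=20, n_classes=4):
        super().__init__()
        # Convolutional front end over the raw EEG time series:
        # (batch, channels, time) -> (batch, 64, time // 4)
        self.conv = nn.Sequential(
            nn.Conv1d(n_eeg_channels, 64, kernel_size=7, padding=3),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        # Recurrent layer captures the temporal dynamics of the EEG response
        self.gru = nn.GRU(64, 64, batch_first=True, bidirectional=True)
        # Small MLP embeds utterance-level prosody features (e.g., pitch,
        # energy, duration statistics -- assumed feature set)
        self.prosody_mlp = nn.Sequential(
            nn.Linear(n_prosody_feats, 32),
            nn.ReLU(),
        )
        # Late fusion: concatenate EEG and prosody embeddings, then classify
        self.classifier = nn.Linear(2 * 64 + 32, n_classes)

    def forward(self, eeg, prosody):
        # eeg: (batch, n_eeg_channels, n_samples); prosody: (batch, n_prosody_feats)
        x = self.conv(eeg)                 # (batch, 64, T')
        x = x.transpose(1, 2)              # (batch, T', 64) for the GRU
        _, h = self.gru(x)                 # h: (2, batch, 64), one per direction
        eeg_emb = torch.cat([h[0], h[1]], dim=1)   # (batch, 128)
        p_emb = self.prosody_mlp(prosody)          # (batch, 32)
        return self.classifier(torch.cat([eeg_emb, p_emb], dim=1))

# Usage with random tensors shaped like a 1 s, 32-channel EEG epoch at 256 Hz
model = EEGProsodyCRNN()
logits = model(torch.randn(8, 32, 256), torch.randn(8, 20))
print(logits.shape)  # torch.Size([8, 4]) -- one logit per emotion class
```

Late fusion (separate encoders joined at the classifier) is only one option; the same components could also be fused earlier, e.g., by conditioning the recurrent layer on the prosody embedding.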
Original language | English |
---|---|
Pages (from-to) | 4254-4258 |
Number of pages | 5 |
Journal | Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH |
Volume | 2023-August |
DOIs | |
Publication status | Published - 2023 |
Externally published | Yes |
Event | 24th Annual Conference of the International Speech Communication Association, Interspeech 2023 - Dublin, Ireland. Duration: 20 Aug 2023 → 24 Aug 2023 |
Keywords
- EEG
- emotion classification
- emotional prosody
- multi-modal learning