SAE+LSTM: A new framework for emotion recognition from multi-channel EEG

Xiaofen Xing, Zhenqi Li, Tianyuan Xu, Lin Shu*, Bin Hu, Xiangmin Xu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

225 Citations (Scopus)

Abstract

EEG-based automatic emotion recognition can help brain-inspired robots improve their interactions with humans. This paper presents a novel framework for emotion recognition using multi-channel electroencephalogram (EEG). The framework consists of a linear EEG mixing model and an emotion timing model. The proposed framework effectively decomposes the EEG source signals from the collected EEG signals and improves classification accuracy by exploiting the context correlations of the EEG feature sequences. Specifically, a Stacked AutoEncoder (SAE) is used to build and solve the linear EEG mixing model, and the emotion timing model is based on the Long Short-Term Memory Recurrent Neural Network (LSTM-RNN). The framework was evaluated on the DEAP dataset in an emotion recognition experiment, where the mean accuracy reached 81.10% in valence and 74.38% in arousal, verifying the effectiveness of the framework. In these experiments, the framework outperformed the compared conventional approaches to emotion recognition from multi-channel EEG.
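To make the described pipeline concrete, the sketch below shows one way to combine an SAE-style encoder-decoder with an LSTM classifier, in the spirit of the abstract. This is a minimal illustration in PyTorch, not the authors' implementation: the layer sizes, sequence length, 32-channel input, and two-class output are all assumed for the example.

```python
# Minimal sketch of an SAE + LSTM pipeline for multi-channel EEG emotion
# recognition. All shapes and hyperparameters are illustrative assumptions,
# not the configuration reported in the paper.
import torch
import torch.nn as nn


class StackedAutoEncoder(nn.Module):
    """Maps each EEG time step (mixed channel signals) to a latent
    source representation; the decoder reconstructs the mixed input."""

    def __init__(self, n_channels=32, hidden=64, latent=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_channels, hidden), nn.ReLU(),
            nn.Linear(hidden, latent), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent, hidden), nn.ReLU(),
            nn.Linear(hidden, n_channels),
        )

    def forward(self, x):
        z = self.encoder(x)          # decomposed source features
        return self.decoder(z), z    # reconstruction + latent sequence


class EmotionLSTM(nn.Module):
    """LSTM over the sequence of latent features, followed by a binary
    classifier head (e.g. high/low valence)."""

    def __init__(self, latent=16, hidden=128, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(latent, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, seq):          # seq: (batch, time, latent)
        out, _ = self.lstm(seq)
        return self.fc(out[:, -1])   # classify from the last time step


if __name__ == "__main__":
    batch, time_steps, n_channels = 8, 60, 32   # illustrative shapes
    eeg = torch.randn(batch, time_steps, n_channels)

    sae = StackedAutoEncoder(n_channels)
    clf = EmotionLSTM()

    recon, z = sae(eeg)     # SAE decomposes the mixed EEG signals
    logits = clf(z)         # LSTM models temporal context of the features
    print(recon.shape, logits.shape)   # torch.Size([8, 60, 32]) torch.Size([8, 2])
```

In practice the SAE would first be trained with a reconstruction loss on the EEG windows, and the LSTM classifier would then be trained on the resulting feature sequences; the end-to-end wiring above only illustrates how the two components fit together.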

Original language: English
Article number: 37
Journal: Frontiers in Neurorobotics
Volume: 13
DOI
Publication status: Published - 2019
Publicly available
