Feature-level fusion approaches based on multimodal EEG data for depression recognition

Hanshu Cai, Zhidiao Qu, Zhe Li, Yi Zhang, Xiping Hu*, Bin Hu

*Corresponding author of this work

Research output: Contribution to journal › Article › peer-review

224 Citations (Scopus)

Abstract

This study aimed to construct a novel multimodal model by fusing different electroencephalogram (EEG) data sources, recorded under neutral, negative and positive audio stimulation, to discriminate between depressed patients and normal controls. The EEG data of the different modalities were fused using a feature-level fusion technique to construct a depression recognition model. The EEG signals of 86 depressed patients and 92 normal controls were recorded while the participants received the different audio stimuli. From the EEG signals of each modality, linear and nonlinear features were then extracted and selected to obtain the feature set of that modality. A linear combination technique was used to fuse the EEG features of the different modalities into a global feature vector and to identify several powerful features. Furthermore, genetic algorithms were used to perform feature weighting to improve the overall performance of the recognition framework. The classification accuracies of the k-nearest neighbor (KNN), decision tree (DT), and support vector machine (SVM) classifiers were compared, and the results were encouraging. The highest classification accuracy of 86.98% was obtained by the KNN classifier on the fusion of the positive and negative audio stimuli, demonstrating that the fused modalities achieve a higher depression recognition accuracy than the individual modality schemes. This study may provide an additional tool for identifying patients with depression.
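The pipeline described in the abstract (per-modality feature extraction, feature-level fusion into a global vector, genetic-algorithm feature weighting, and KNN classification) can be illustrated with a minimal sketch. The code below is a hypothetical reconstruction, not the authors' implementation: the feature dimensions, the 5-neighbor KNN, the GA population size, mutation scale, and selection scheme, and the random placeholder feature matrices are all assumptions made for illustration.

```python
# Minimal sketch (assumptions throughout): fuse per-modality EEG feature
# matrices by concatenation, learn per-feature weights with a simple genetic
# algorithm, and classify with KNN. Placeholder data stands in for the
# linear/nonlinear EEG features extracted in the study.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical per-modality feature matrices (subjects x features), e.g.
# features extracted under positive and negative audio stimulation.
n_subjects = 178                        # 86 depressed patients + 92 controls
X_pos = rng.normal(size=(n_subjects, 20))
X_neg = rng.normal(size=(n_subjects, 20))
y = np.array([1] * 86 + [0] * 92)       # 1 = depressed, 0 = control

# Feature-level fusion: concatenate modality features into one global vector.
X_fused = np.hstack([X_pos, X_neg])

X_tr, X_te, y_tr, y_te = train_test_split(
    X_fused, y, test_size=0.3, stratify=y, random_state=0)

def fitness(weights):
    """GA fitness: KNN accuracy on feature-weighted data (illustrative split)."""
    knn = KNeighborsClassifier(n_neighbors=5)
    knn.fit(X_tr * weights, y_tr)
    return accuracy_score(y_te, knn.predict(X_te * weights))

# Minimal genetic algorithm over real-valued feature weights in [0, 1].
pop_size, n_gen, n_feat = 30, 20, X_fused.shape[1]
pop = rng.random((pop_size, n_feat))
for _ in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    # Truncation selection: keep the better half of the population as parents.
    parents = pop[np.argsort(scores)[-pop_size // 2:]]
    # Uniform crossover plus Gaussian mutation to refill the population.
    children = []
    while len(children) < pop_size - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        mask = rng.random(n_feat) < 0.5
        child = np.where(mask, a, b) + rng.normal(0.0, 0.1, n_feat)
        children.append(np.clip(child, 0.0, 1.0))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print(f"fused-modality KNN accuracy with GA weights: {fitness(best):.3f}")
```

In the actual study the weighted features would be evaluated with a proper validation protocol rather than the single held-out split used here for brevity, and the DT and SVM classifiers would be compared under the same fused and individual-modality feature sets.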

Original language: English
Pages (from-to): 127-138
Number of pages: 12
Journal: Information Fusion
Volume: 59
DOI
Publication status: Published - Jul 2020
Externally published: Yes
