Abstract
This study aimed to construct a novel multimodal model that fuses electroencephalogram (EEG) data recorded under neutral, negative, and positive audio stimulation to discriminate between depressed patients and normal controls. The EEG data of the different modalities were fused with a feature-level fusion technique to build a depression recognition model. EEG signals were recorded from 86 depressed patients and 92 normal controls while they received the different audio stimuli. Linear and nonlinear features were then extracted from the EEG signals of each modality and selected to form each modality's feature set. A linear combination technique was used to fuse the EEG features of the different modalities into a global feature vector and to identify several powerful features. A genetic algorithm was then applied to weight the features and improve the overall performance of the recognition framework. The classification accuracies of the k-nearest neighbor (KNN), decision tree (DT), and support vector machine (SVM) classifiers were compared, and the results were encouraging. The highest classification accuracy, 86.98%, was obtained by the KNN classifier on the fusion of the positive and negative audio stimuli, demonstrating that the fused modalities achieve higher depression recognition accuracy than individual-modality schemes. This study may provide an additional tool for identifying patients with depression.
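As a rough illustration of the pipeline the abstract describes (feature-level fusion of per-modality EEG features, genetic-algorithm feature weighting, and KNN classification), the sketch below implements a minimal version in Python. The synthetic feature matrices, the concatenation fusion rule, and the GA settings are illustrative assumptions; the paper's exact features, fusion formula, and GA configuration are not given in the abstract.

```python
# Minimal sketch of feature-level fusion + GA feature weighting + KNN,
# following the pipeline outlined in the abstract. All data are synthetic
# placeholders; feature counts and GA hyperparameters are assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic per-modality feature matrices: 178 subjects (86 depressed,
# 92 controls), e.g. 20 linear + nonlinear EEG features per modality.
n_subjects, n_feats = 178, 20
y = np.array([1] * 86 + [0] * 92)                 # 1 = depressed, 0 = control
X_pos = rng.normal(size=(n_subjects, n_feats)) + 0.4 * y[:, None]
X_neg = rng.normal(size=(n_subjects, n_feats)) + 0.3 * y[:, None]

# Feature-level fusion: concatenate modality features into a global vector.
X_fused = np.hstack([X_pos, X_neg])

def fitness(weights):
    """Cross-validated KNN accuracy with per-feature weights applied."""
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X_fused * weights, y, cv=5).mean()

# Simple generational GA over feature-weight vectors (illustrative settings).
pop_size, n_gen, mut_rate = 30, 20, 0.1
pop = rng.uniform(0, 1, size=(pop_size, X_fused.shape[1]))
for gen in range(n_gen):
    scores = np.array([fitness(w) for w in pop])
    elite = pop[np.argsort(scores)[-pop_size // 2:]]      # keep best half
    # Uniform crossover between pairs of randomly chosen elite parents.
    parents = elite[rng.integers(len(elite), size=(pop_size, 2))]
    mask = rng.random((pop_size, X_fused.shape[1])) < 0.5
    pop = np.where(mask, parents[:, 0], parents[:, 1])
    # Gaussian mutation on a random subset of weights, clipped to [0, 1].
    mutate = rng.random(pop.shape) < mut_rate
    pop = np.clip(pop + mutate * rng.normal(0, 0.2, pop.shape), 0, 1)

best = pop[np.argmax([fitness(w) for w in pop])]
print(f"best cross-validated KNN accuracy: {fitness(best):.3f}")
```

In this sketch the GA's fitness is the cross-validated KNN accuracy of the weighted global feature vector, so the evolved weights directly scale each fused feature's contribution, mirroring the abstract's use of feature weighting to improve the recognition framework's performance.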
Original language | English |
---|---|
Pages (from-to) | 127-138 |
Number of pages | 12 |
Journal | Information Fusion |
Volume | 59 |
DOIs | |
Publication status | Published - Jul 2020 |
Externally published | Yes |
Keywords
- Audio stimulus
- Depression recognition
- EEG
- Fusion
- Multimodal