Abstract
In this paper, we used an electroencephalography (EEG)-eye movement (EM) synchronization acquisition network to simultaneously record the EEG and EM physiological signals of mild depression patients and normal controls during free viewing. We then considered a multimodal feature fusion method that can best discriminate between mild depression and normal control subjects, as a step toward our long-term aim of developing an objective and effective multimodal system that assists doctors in the diagnosis and monitoring of mild depression. Based on a multimodal denoising autoencoder, we used two fusion strategies (feature fusion and hidden layer fusion) to fuse the EEG and EM signals and improve the recognition performance of classifiers for mild depression. Our experimental results indicate that the EEG-EM synchronization acquisition network keeps the recorded EM and EEG data streams synchronized with millisecond precision, and that both fusion methods improve mild depression recognition accuracy, demonstrating the complementary nature of the two modalities. Compared with the unimodal classification approach that uses only EEG or EM, the feature fusion method slightly improved the recognition accuracy, by 1.88%, while the hidden layer fusion method significantly improved the classification rate, by up to 7.36%. The highest classification accuracy achieved in this paper was 83.42%. These results indicate that multimodal deep learning approaches that combine EEG and EM signals are promising for real-time monitoring and identification of mild depression.
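To make the hidden layer fusion strategy concrete, the sketch below shows one plausible way to fuse EEG and EM features in a multimodal denoising autoencoder: each modality is encoded separately, the per-modality hidden codes are concatenated, and a shared layer learns a joint representation that can feed a downstream classifier. This is a minimal illustration only; the layer sizes, noise model, and input dimensions (`eeg_dim`, `em_dim`) are hypothetical and are not taken from the paper.

```python
import torch
import torch.nn as nn

class MultimodalDenoisingAE(nn.Module):
    """Hidden-layer fusion sketch: separate encoders per modality,
    a shared joint layer, and per-modality decoders for the
    denoising reconstruction objective."""
    def __init__(self, eeg_dim=310, em_dim=33, hidden=64, joint=32):
        super().__init__()
        self.eeg_enc = nn.Sequential(nn.Linear(eeg_dim, hidden), nn.ReLU())
        self.em_enc = nn.Sequential(nn.Linear(em_dim, hidden), nn.ReLU())
        self.joint_enc = nn.Sequential(nn.Linear(2 * hidden, joint), nn.ReLU())
        self.joint_dec = nn.Sequential(nn.Linear(joint, 2 * hidden), nn.ReLU())
        self.eeg_dec = nn.Linear(hidden, eeg_dim)
        self.em_dec = nn.Linear(hidden, em_dim)

    def forward(self, eeg, em, noise_std=0.1):
        # Denoising objective: corrupt the inputs with Gaussian noise
        # (an assumption; the corruption process could differ) and
        # train the network to reconstruct the clean signals.
        eeg_n = eeg + noise_std * torch.randn_like(eeg)
        em_n = em + noise_std * torch.randn_like(em)
        # Hidden layer fusion: concatenate the modality-specific codes.
        h = torch.cat([self.eeg_enc(eeg_n), self.em_enc(em_n)], dim=1)
        z = self.joint_enc(h)  # fused representation for classification
        h_hat = self.joint_dec(z)
        half = h_hat.size(1) // 2
        eeg_hat = self.eeg_dec(h_hat[:, :half])
        em_hat = self.em_dec(h_hat[:, half:])
        return z, eeg_hat, em_hat
```

Under this sketch, the model would be trained with a reconstruction loss (e.g., mean squared error against the clean EEG and EM inputs), after which the fused code `z` serves as the input to a mild depression classifier. The simpler feature fusion strategy mentioned in the abstract would instead concatenate the raw EEG and EM feature vectors before a single shared encoder.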
| Original language | English |
|---|---|
| Article number | 8653893 |
| Pages (from-to) | 28196-28210 |
| Number of pages | 15 |
| Journal | IEEE Access |
| Volume | 7 |
| DOIs | |
| Publication status | Published - 2019 |
| Externally published | Yes |
Keywords
- EEG
- classification
- eye movement
- mild depression
- multimodal deep learning
- network