An attention-based multi-modal MRI fusion model for major depressive disorder diagnosis

Guowei Zheng, Weihao Zheng*, Yu Zhang, Junyu Wang, Miao Chen, Yin Wang, Tianhong Cai, Zhijun Yao*, Bin Hu*

*Corresponding authors of this work

Research output: Contribution to journal › Article › Peer-review

5 Citations (Scopus)

Abstract

Objective. Major depressive disorder (MDD) is one of the biggest threats to human mental health. MDD is characterized by aberrant changes in both the structure and function of the brain. Although recent studies have developed deep learning models based on multi-modal magnetic resonance imaging (MRI) for MDD diagnosis, the latent associations between deep features derived from different modalities have remained largely unexplored; we hypothesized that capturing these associations may improve the diagnostic accuracy of MDD. Approach. In this study, we proposed a novel deep learning model that fuses structural MRI (sMRI) and resting-state functional MRI (rs-fMRI) data to enhance the diagnosis of MDD by capturing the interactions between deep features extracted from the two modalities. Specifically, we first employed a brain function encoder (BFE) and a brain structure encoder (BSE) to extract deep features from fMRI and sMRI, respectively. We then designed a function and structure co-attention fusion (FSCF) module that captures inter-modal interactions and adaptively fuses the multi-modal deep features for MDD diagnosis. Main results. The model was evaluated on a large cohort and achieved a high classification accuracy of 75.2% for MDD diagnosis. Moreover, the attention distribution of the FSCF module assigned higher attention weights to structural features than to functional features for diagnosing MDD. Significance. The high classification accuracy highlights the effectiveness and potential clinical value of the proposed model.
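The abstract does not specify the internals of the FSCF module. Purely as an illustration of the general idea (cross-modal attention between functional and structural deep features, followed by adaptive modality weighting), here is a minimal PyTorch-style sketch. All names (CoAttentionFusion, func_dim, struct_dim, hidden_dim), layer sizes, the pooling step, and the softmax-based modality weighting are assumptions for this sketch, not the authors' implementation.

```python
# Hypothetical sketch of a function-structure co-attention fusion block.
# Layer sizes and the exact attention formulation are assumptions; the
# paper's actual BFE/BSE encoders and FSCF module may differ.
import torch
import torch.nn as nn


class CoAttentionFusion(nn.Module):
    """Cross-attends functional and structural feature sequences, then fuses
    them with learned per-modality weights before classification."""

    def __init__(self, func_dim: int, struct_dim: int, hidden_dim: int = 128):
        super().__init__()
        # Project both modalities into a shared embedding space.
        self.func_proj = nn.Linear(func_dim, hidden_dim)
        self.struct_proj = nn.Linear(struct_dim, hidden_dim)
        # Co-attention: each modality attends to the other.
        self.func_attn = nn.MultiheadAttention(hidden_dim, num_heads=4, batch_first=True)
        self.struct_attn = nn.MultiheadAttention(hidden_dim, num_heads=4, batch_first=True)
        # Scalar score per modality, softmaxed into adaptive fusion weights
        # (analogous to the "attention distribution" reported in the abstract).
        self.modality_score = nn.Linear(hidden_dim, 1)
        self.classifier = nn.Linear(hidden_dim, 2)  # MDD vs. healthy control

    def forward(self, func_feat: torch.Tensor, struct_feat: torch.Tensor):
        # func_feat: (batch, n_func_tokens, func_dim), e.g. ROI-wise fMRI features
        # struct_feat: (batch, n_struct_tokens, struct_dim), e.g. sMRI patch features
        f = self.func_proj(func_feat)
        s = self.struct_proj(struct_feat)
        # Function queries structure and vice versa.
        f_attended, _ = self.func_attn(query=f, key=s, value=s)
        s_attended, _ = self.struct_attn(query=s, key=f, value=f)
        # Pool each modality to a single vector.
        f_vec = f_attended.mean(dim=1)
        s_vec = s_attended.mean(dim=1)
        # Adaptive fusion via softmax over per-modality scores.
        scores = torch.cat([self.modality_score(f_vec), self.modality_score(s_vec)], dim=1)
        weights = torch.softmax(scores, dim=1)  # (batch, 2): [w_func, w_struct]
        fused = weights[:, 0:1] * f_vec + weights[:, 1:2] * s_vec
        return self.classifier(fused), weights


if __name__ == "__main__":
    # Toy shapes only; real inputs would come from the BFE and BSE encoders.
    fusion = CoAttentionFusion(func_dim=64, struct_dim=96)
    logits, weights = fusion(torch.randn(4, 90, 64), torch.randn(4, 116, 96))
    print(logits.shape, weights.shape)  # torch.Size([4, 2]) torch.Size([4, 2])
```

Inspecting the returned weights per subject would show which modality the fusion relies on more, which is the kind of analysis behind the abstract's observation that structural features received higher attention than functional ones.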

Original language: English
Article number: 066005
Journal: Journal of Neural Engineering
Volume: 20
Issue number: 6
DOI
Publication status: Published - 1 Dec 2023
