Deep adaptive fusion network with multimodal neuroimaging information for MDD diagnosis: an open data study

  • Tongtong Li
  • Kai Li
  • Ziyang Zhao
  • Qi Sun
  • Xinyan Zhang
  • Zhijun Yao*
  • Jiansong Zhou
  • Bin Hu
  *Corresponding author of this work

Research output: Contribution to journal › Article › peer-review

Abstract

Neuroimaging offers powerful evidence for the automated diagnosis of major depressive disorder (MDD). However, discrepancies across imaging modalities hinder the exploration of cross-modal interactions and the effective integration of complementary features. To address this challenge, we propose a supervised Deep Adaptive Fusion Network (DAFN) that fully leverages the complementarity of multimodal neuroimaging information for the diagnosis of MDD. Specifically, high- and low-frequency features are extracted from the images using a customized convolutional neural network and multi-head self-attention encoders, respectively. A modality weight adaptation module dynamically adjusts the contribution of each modality during training, while a progressive information reinforcement training strategy strengthens the multimodal fusion features. Finally, the performance of DAFN is evaluated on both an open-access dataset and a recruited dataset. The results demonstrate that DAFN achieves competitive performance in multimodal neuroimaging fusion for the diagnosis of MDD. The source code is available at: https://github.com/TTLi1996/DAFN.
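The modality weight adaptation idea described above can be sketched in a few lines: each modality's feature vector is scaled by a learnable weight, with the weights normalized by a softmax so that the contributions sum to one and can shift during training. This is a minimal illustrative sketch only; the class name, shapes, and use of plain NumPy are assumptions for exposition, not the paper's implementation (see the linked repository for the actual code).

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

class ModalityWeightAdapter:
    """Toy sketch: one learnable scalar logit per modality, turned into
    softmax weights that scale each modality's feature vector before a
    simple additive fusion."""
    def __init__(self, n_modalities):
        # In a real model these logits would be trained by backprop;
        # zeros give equal weights at initialization.
        self.logits = np.zeros(n_modalities)

    def fuse(self, features):
        # features: list of equally-shaped (d,) vectors, one per modality
        weights = softmax(self.logits)
        return sum(w * f for w, f in zip(weights, features))

# Usage with two stand-in modality features (random placeholders, not data):
adapter = ModalityWeightAdapter(n_modalities=2)
f_structural = np.ones(4)        # stand-in for one modality's features
f_functional = np.full(4, 3.0)   # stand-in for another modality's features
fused = adapter.fuse([f_structural, f_functional])
print(fused)  # equal weights at init -> elementwise mean -> [2. 2. 2. 2.]
```

At initialization the softmax yields equal weights, so fusion reduces to an elementwise mean; as training moves the logits apart, the fusion shifts toward the more informative modality.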

Original language: English
Article number: 108151
Journal: Neural Networks
Volume: 194
DOI
Publication status: Published - Feb 2026
Externally published: Yes
