TY - JOUR
T1 - From sMRI to task-fMRI
T2 - A unified geometric deep learning framework for cross-modal brain anatomo-functional mapping
AU - Zhu, Zhiyuan
AU - Huang, Taicheng
AU - Zhen, Zonglei
AU - Wang, Boyu
AU - Wu, Xia
AU - Li, Shuo
N1 - Publisher Copyright:
© 2022 Elsevier B.V.
PY - 2023/1
Y1 - 2023/1
N2 - Predicting brain functional activation patterns (task-fMRI maps) from the underlying brain anatomy is an important yet challenging problem. Success would not only open up new ways to understand how brain anatomy influences the functional organization of the brain, but also provide new technical support for the clinical use of anatomical information to guide the localization of cortical functional areas. However, owing to the complex non-Euclidean architecture of brain anatomy and the inherently low signal-to-noise ratio (SNR) of fMRI signals, the key challenge in building such a cross-modal brain anatomo-functional mapping is to effectively learn context-aware information from brain anatomy while overcoming the interference of noisy task-fMRI labels with the learning process. In this work, we propose a Unified Geometric Deep Learning framework (BrainUGDL) to perform the cross-modal brain anatomo-functional mapping task. Because both the global and the local structure of brain anatomy influence brain function from their respective perspectives, we propose a novel Global Graph Encoding (GGE) unit and a Local Graph Attention (LGA) unit, embedded in two parallel branches that learn high-level global and local context information, respectively. Specifically, GGE learns the global context of each mesh vertex by building and encoding global interactions, while LGA learns the local context of each mesh vertex by selectively aggregating patch-structure-enhanced features from its spatial neighbors. The information learned by the two branches is then fused into a comprehensive representation of brain anatomical features for the final brain function predictions. To address the inevitable measurement noise in task-fMRI labels, we further devise a novel uncertainty-filtered learning mechanism that enables BrainUGDL to revise its learning from noisy labels according to the estimated uncertainty. Experiments on seven open task-fMRI datasets from the Human Connectome Project (HCP) demonstrate the superiority of BrainUGDL. To the best of our knowledge, BrainUGDL is the first framework to predict individual task-fMRI maps solely from brain sMRI data.
AB - Predicting brain functional activation patterns (task-fMRI maps) from the underlying brain anatomy is an important yet challenging problem. Success would not only open up new ways to understand how brain anatomy influences the functional organization of the brain, but also provide new technical support for the clinical use of anatomical information to guide the localization of cortical functional areas. However, owing to the complex non-Euclidean architecture of brain anatomy and the inherently low signal-to-noise ratio (SNR) of fMRI signals, the key challenge in building such a cross-modal brain anatomo-functional mapping is to effectively learn context-aware information from brain anatomy while overcoming the interference of noisy task-fMRI labels with the learning process. In this work, we propose a Unified Geometric Deep Learning framework (BrainUGDL) to perform the cross-modal brain anatomo-functional mapping task. Because both the global and the local structure of brain anatomy influence brain function from their respective perspectives, we propose a novel Global Graph Encoding (GGE) unit and a Local Graph Attention (LGA) unit, embedded in two parallel branches that learn high-level global and local context information, respectively. Specifically, GGE learns the global context of each mesh vertex by building and encoding global interactions, while LGA learns the local context of each mesh vertex by selectively aggregating patch-structure-enhanced features from its spatial neighbors. The information learned by the two branches is then fused into a comprehensive representation of brain anatomical features for the final brain function predictions. To address the inevitable measurement noise in task-fMRI labels, we further devise a novel uncertainty-filtered learning mechanism that enables BrainUGDL to revise its learning from noisy labels according to the estimated uncertainty. Experiments on seven open task-fMRI datasets from the Human Connectome Project (HCP) demonstrate the superiority of BrainUGDL. To the best of our knowledge, BrainUGDL is the first framework to predict individual task-fMRI maps solely from brain sMRI data.
KW - Brain anatomo-functional mapping
KW - Geometric deep learning
KW - Task-fMRI
KW - sMRI
UR - http://www.scopus.com/inward/record.url?scp=85143645591&partnerID=8YFLogxK
U2 - 10.1016/j.media.2022.102681
DO - 10.1016/j.media.2022.102681
M3 - Article
C2 - 36459804
AN - SCOPUS:85143645591
SN - 1361-8415
VL - 83
JO - Medical Image Analysis
JF - Medical Image Analysis
M1 - 102681
ER -