Contrastive Hierarchical Augmentation Learning for Modeling Cognitive and Multimodal Brain Network

Gen Shi, Yuxiang Yao, Yifan Zhu, Xinyue Lin, Lanxin Ji, Wenjin Liu, Xuesong Li

Research output: Contribution to journal › Article › peer-review

Abstract

Brain networks generated from functional magnetic resonance imaging (fMRI) have shown promising performance in characterizing cerebral social cognition and disorders. However, the scarcity of labeled data has hindered the application of deep graph learning to brain network analysis, motivating the use of label-free self-supervised contrastive graph learning. Yet the augmentation strategies commonly used in contrastive learning (CL), such as edge and node dropping, do not fully benefit brain network learning because of the distribution differences between different modalities of brain neuroimaging. To this end, we introduce a novel approach, spatial–temporal hierarchical augmentation-based contrastive learning (ST-HACL), to enhance the representation learning of functional brain networks. ST-HACL leverages augmentation methods tailored specifically to brain networks: it applies augmentation at both the spatial and temporal levels during the brain network construction process to generate contrastive samples, enabling label-free self-supervised learning. We evaluate the performance of our approach on the orthostatic hypotension (OH) dataset and the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. Results demonstrate that our model surpasses existing graph neural network (GNN) models and graph-CL methods, achieving F1 scores of 80.61% on OH and 73.01% on ADNI. To the best of our knowledge, our study represents the first attempt to apply brain network-specific contrastive augmentation learning to fMRI analysis.
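The abstract describes a generic contrastive recipe: generate two augmented views of each subject's ROI time series (one temporal-level and one spatial-level perturbation), build a functional brain network from each view, and pull paired views together with a contrastive loss. The sketch below is an illustrative minimal version of that recipe, not the authors' ST-HACL implementation; all function names, augmentation choices, and parameters are assumptions, and flattened correlation matrices stand in for learned GNN embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)

def temporal_crop(ts, crop_len):
    """Temporal-level augmentation: random contiguous window of the
    ROI time series. ts has shape (n_rois, n_timepoints)."""
    start = rng.integers(0, ts.shape[1] - crop_len + 1)
    return ts[:, start:start + crop_len]

def spatial_jitter(ts, p=0.2, sigma=0.5):
    """Spatial-level augmentation (illustrative): perturb a random
    subset of ROI signals with Gaussian noise."""
    mask = rng.random(ts.shape[0]) < p
    return ts + mask[:, None] * sigma * rng.standard_normal(ts.shape)

def connectivity(ts):
    """Functional brain network: ROI-wise Pearson correlation matrix."""
    return np.corrcoef(ts)

def logsumexp(x, axis):
    m = np.max(x, axis=axis, keepdims=True)
    return (m + np.log(np.exp(x - m).sum(axis=axis, keepdims=True))).squeeze(axis)

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent contrastive loss over a batch of paired view embeddings:
    each row of z1 is a positive pair with the same row of z2."""
    z = np.concatenate([z1, z2])                       # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim[np.arange(2 * n), pos] - logsumexp(sim, axis=1)
    return -log_prob.mean()

# Toy demo: 4 subjects, 16 ROIs, 120 time points of synthetic BOLD signal.
batch = [rng.standard_normal((16, 120)) for _ in range(4)]
view = lambda ts: connectivity(spatial_jitter(temporal_crop(ts, 90))).ravel()
z1 = np.stack([view(ts) for ts in batch])  # first augmented view per subject
z2 = np.stack([view(ts) for ts in batch])  # second augmented view per subject
loss = nt_xent(z1, z2)
```

In a full pipeline the flattened connectivity matrices would instead be encoded by a GNN before the loss is computed, and the loss minimized over unlabeled subjects before fine-tuning on the labeled classification task.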

Original language: English
Pages (from-to): 1-11
Number of pages: 11
Journal: IEEE Transactions on Computational Social Systems
DOI
Publication status: Accepted/In press - 2024
