Graph pre-trained framework with spatio-temporal importance masking and fine-grained optimizing for neural decoding

Ziyu Li, Zhiyuan Zhu, Qing Li, Xia Wu*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Neural decoding has long been a cutting-edge issue in neuroscience, and significant progress has been made with the support of deep learning technology. However, these breakthroughs rely on large-scale, fully annotated functional magnetic resonance imaging (fMRI) data, which greatly hinders their broader applicability. Recently, foundation models have garnered considerable attention in natural language processing, computer vision, and multimodal data processing, owing to their ability to circumvent the need for extensive annotated datasets while achieving notable accuracy gains. Nevertheless, formulating effective foundation-model approaches tailored to connectivity-based, complex spatio-temporal brain networks remains an unresolved challenge. To address these issues, in this paper we propose a general Temporal-Aware Graph Self-supervised Contrastive learning framework (TAGSC) for fMRI-based neural decoding. Concretely, it includes three innovations that enhance fMRI-based graph foundation models: (i) a spatio-temporal augmentation strategy that accounts for spatial brain-region synergy and temporal information continuity to generate spatio-temporal contrastive views of the brain; (ii) a temporal-aware feature extractor that learns brain spatio-temporal representations, fully accounting for the continuous consistency of brain-state transitions and capturing spatio-temporal interaction information from local to global scales; and (iii) a fine-grained consistency loss that assists contrastive optimization from both temporal and spatial perspectives. Extensive evaluation on publicly available fMRI datasets demonstrates the superior performance of the proposed TAGSC and reveals biomarkers related to different brain states. To the best of our knowledge, this is among the earliest attempts to employ a spatio-temporal pre-trained model for neural decoding.
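To make the contrastive setup concrete, the sketch below illustrates the general pattern the abstract describes: two augmented spatio-temporal views of each subject's fMRI signal (a temporal crop preserving state-transition continuity, plus random masking of brain regions) are encoded and pulled together by an NT-Xent contrastive loss. This is a minimal illustration of generic graph/temporal contrastive learning in NumPy, not the authors' TAGSC implementation; the encoder, augmentation parameters, and data shapes are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def temporal_crop(x, crop_len):
    """Temporal augmentation: keep a contiguous window of time points,
    preserving the continuity of brain-state transitions."""
    start = rng.integers(0, x.shape[0] - crop_len + 1)
    return x[start:start + crop_len]

def spatial_mask(x, drop_prob=0.2):
    """Spatial augmentation: randomly zero out whole ROIs (graph nodes)."""
    keep = rng.random(x.shape[1]) > drop_prob
    return x * keep

def embed(x):
    """Stand-in encoder: mean-pool over time, then L2-normalise.
    A real model would use a temporal-aware graph encoder here."""
    z = x.mean(axis=0)
    return z / (np.linalg.norm(z) + 1e-8)

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over a batch of paired views."""
    z = np.concatenate([z1, z2], axis=0)            # (2N, d), rows unit-norm
    sim = z @ z.T / temperature                     # pairwise cosine similarities
    np.fill_diagonal(sim, -np.inf)                  # exclude self-similarity
    n = z1.shape[0]
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), targets].mean()

# Toy batch: 4 subjects, each a (T=100 time points, R=32 ROIs) fMRI matrix.
batch = [rng.standard_normal((100, 32)) for _ in range(4)]
z1 = np.stack([embed(spatial_mask(temporal_crop(x, 80))) for x in batch])
z2 = np.stack([embed(spatial_mask(temporal_crop(x, 80))) for x in batch])
loss = nt_xent(z1, z2)
print(f"contrastive loss: {loss:.3f}")
```

Minimizing this loss makes the two views of the same subject agree while pushing apart views of different subjects, which is the self-supervised signal that lets the framework pre-train without annotations.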

Original language: English
Article number: 112006
Journal: Pattern Recognition
Volume: 170
DOIs
Publication status: Published - Feb 2026
Externally published: Yes

Keywords

  • Graph self-supervised learning
  • Neural decoding
  • Spatio-temporal
  • Temporal-aware
