Constraint-Driven Causal Representation Learning for Vigilance Robust Estimation in Brain–Computer Interface

  • Xuan Zhang
  • Wang Zheng
  • Zhigang Li
  • Yi Yang
  • Weijia Liu
  • Hongxin Cai
  • Junru Zhu
  • Jingyu Liu*
  • Bin Hu*
  • Qunxi Dong*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Vigilance estimation is a critical task in brain–computer interfaces, widely applied to monitoring and optimizing user states during human–machine interaction using electroencephalography (EEG). However, most existing vigilance prediction frameworks are prone to spurious correlations stemming from inherent biases in the collected data. These biases involve relevant but vigilance-independent information and may lack robustness under different data distributions, i.e., out-of-distribution (OOD) scenarios. The core idea of this study is to learn constraints that capture causal information from the input based on an assumed underlying data-generating process. Leveraging the disentanglement and invariance principles behind these assumptions, we propose constraint-driven causal representation learning (CCRL) to identify and separate spurious latent variables from biased training data for generalized vigilance estimation. CCRL training consists of two phases: self-supervised pretraining and constraint-driven causal information disentanglement. In the first phase, based on the masked autoencoder (MAE) architecture, unlabeled training data are used for reconstruction pretext tasks to capture comprehensive, intrinsic contextual information from the EEG data, providing a powerful input for downstream disentanglement learning. In the second phase, we propose a novel disentanglement strategy, driven by adversarial and invariance constraints, to learn spurious-free latent representations causally related to the vigilance state. Comprehensive validation experiments on two well-known public datasets demonstrate the effectiveness and superiority of the proposed framework. Overall, this work has promising implications for addressing OOD challenges in vigilance estimation.
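The abstract's two-phase recipe (an MAE-style masked-reconstruction pretext objective, then disentanglement under invariance constraints) can be illustrated with a minimal NumPy sketch. The tensor shapes, patch length, and the variance-of-per-environment-risks penalty below are illustrative assumptions, not the paper's actual implementation; the adversarial constraint is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy EEG batch (hypothetical shapes): batch x channels x time samples.
X = rng.standard_normal((8, 16, 128))
PATCH = 16  # length of one time patch -- an illustrative choice

def mask_patches(x, mask_ratio=0.5, rng=rng):
    """MAE-style pretext: zero out a random fraction of time patches."""
    b, c, t = x.shape
    keep = rng.random((b, t // PATCH)) > mask_ratio      # (b, n_patch)
    mask = np.repeat(keep, PATCH, axis=1)                # (b, t)
    mask = np.broadcast_to(mask[:, None, :], x.shape)    # (b, c, t)
    return x * mask, mask

def mae_loss(x_rec, x, mask):
    """Mean squared reconstruction error on the masked positions only."""
    hidden = ~mask
    return float(((x_rec - x) ** 2 * hidden).sum() / hidden.sum())

def invariance_penalty(env_losses):
    """Phase-2 invariance constraint (sketch): penalize the variance of
    per-environment risks so the learned representation performs
    uniformly across data distributions (OOD environments)."""
    return float(np.var(np.asarray(env_losses, dtype=float)))

x_masked, mask = mask_patches(X)
# A model that reconstructed the input perfectly would incur zero pretext loss:
print(mae_loss(X, X, mask))            # 0.0
# Equal per-environment risks incur zero invariance penalty:
print(invariance_penalty([0.4, 0.4]))  # 0.0
```

In a full pipeline, the encoder would first be trained on `mae_loss` alone, after which the disentanglement head would minimize the task loss plus the adversarial term and `invariance_penalty` over per-dataset (per-environment) risks.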

Original language: English
Pages (from-to): 20328-20342
Number of pages: 15
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 36
Issue number: 12
Publication status: Published - 2025
Externally published: Yes

Keywords

  • Brain–computer interface
  • causal inference
  • disentangled representation learning
  • neural network
  • vigilance estimation
