TY - GEN
T1 - A Convolutional Block Attention Module and Multi-band Fusion Network for Embedded AR-SSVEP BCI Systems
AU - Zhang, Hao
AU - Sun, Ying
AU - Wang, Qiaoyi
AU - Ma, Kang
AU - Zhang, Shuailei
AU - Zhang, Feiyang
AU - Hu, Chun
AU - Zheng, Dezhi
N1 - Publisher Copyright:
© 2025 ACM.
PY - 2025/12/29
Y1 - 2025/12/29
N2 - Brain-computer interfaces (BCI) based on augmented reality steady-state visual evoked potentials (AR-SSVEP) face critical challenges in mobile environments, including low signal-to-noise ratio (SNR) from dry electrodes and limited computational resources on mobile embedded platforms. To optimize AR-SSVEP system performance, this study comprehensively considers the stimulus-response coupling mechanism, integrating visual optical principles with deep learning-based classification. First, we designed an optimal AR visual stimulation configuration scheme capable of adaptively adjusting key parameters. Second, to address the time-varying non-stationary characteristics of SSVEP and inter-electrode quality variations in dry electrode systems, we propose CBAM-FNet, a lightweight SSVEP detection algorithm that combines the Convolutional Block Attention Module (CBAM) with multi-band fusion. The algorithm achieves classification accuracies of 93.84% on benchmark datasets and 74.43% on our self-collected AR-SSVEP dataset, representing performance improvements of up to 22.96% over state-of-the-art methods. Real-time implementation on an embedded unmanned vehicle platform demonstrates 65% control accuracy with an information transfer rate of 50.35 bits/min, validating the practical value of CBAM-FNet in embedded BCI applications and overcoming hardware-imposed performance limitations.
AB - Brain-computer interfaces (BCI) based on augmented reality steady-state visual evoked potentials (AR-SSVEP) face critical challenges in mobile environments, including low signal-to-noise ratio (SNR) from dry electrodes and limited computational resources on mobile embedded platforms. To optimize AR-SSVEP system performance, this study comprehensively considers the stimulus-response coupling mechanism, integrating visual optical principles with deep learning-based classification. First, we designed an optimal AR visual stimulation configuration scheme capable of adaptively adjusting key parameters. Second, to address the time-varying non-stationary characteristics of SSVEP and inter-electrode quality variations in dry electrode systems, we propose CBAM-FNet, a lightweight SSVEP detection algorithm that combines the Convolutional Block Attention Module (CBAM) with multi-band fusion. The algorithm achieves classification accuracies of 93.84% on benchmark datasets and 74.43% on our self-collected AR-SSVEP dataset, representing performance improvements of up to 22.96% over state-of-the-art methods. Real-time implementation on an embedded unmanned vehicle platform demonstrates 65% control accuracy with an information transfer rate of 50.35 bits/min, validating the practical value of CBAM-FNet in embedded BCI applications and overcoming hardware-imposed performance limitations.
KW - augmented reality
KW - brain-computer interface
KW - convolutional block attention module
KW - steady-state visual evoked potentials
UR - https://www.scopus.com/pages/publications/105027071511
U2 - 10.1145/3714394.3756273
DO - 10.1145/3714394.3756273
M3 - Conference contribution
AN - SCOPUS:105027071511
T3 - UbiComp Companion 2025 - Companion of the 2025 ACM International Joint Conference on Pervasive and Ubiquitous Computing
SP - 1327
EP - 1333
BT - UbiComp Companion 2025 - Companion of the 2025 ACM International Joint Conference on Pervasive and Ubiquitous Computing
A2 - Beigl, Michael
A2 - Jacucci, Giulio
A2 - Sigg, Stephan
A2 - Xiao, Yu
A2 - Bardram, Jakob E.
A2 - Tsiropoulou, Eirini Eleni
A2 - Xu, Chenren
PB - Association for Computing Machinery, Inc
T2 - 2025 ACM International Joint Conference on Pervasive and Ubiquitous Computing, UbiComp Companion 2025
Y2 - 12 October 2025 through 16 October 2025
ER -