TY - JOUR
T1 - Temporal Self-Attentional and Adaptive Graph Convolutional Mixed Model for Sleep Staging
AU - Chen, Ziyang
AU - Shi, Wenbin
AU - Zhang, Xianchao
AU - Yeh, Chien Hung
N1 - Publisher Copyright:
© 2001-2012 IEEE.
PY - 2024/4/15
Y1 - 2024/4/15
N2 - Evaluating sleep quality through reliable sleep staging is of paramount importance. Although many studies have achieved fair performance in sleep stage classification, effectively leveraging the spatial-temporal characteristics of multichannel brain recordings remains challenging. In this study, we develop a novel temporal self-attentional and adaptive graph convolutional mixed model (TS-AGCMM), comprising a feature extraction module (FEM), a dynamic time warping (DTW)-based attention module, a temporal context module (TCM), and an adaptive graph convolutional module (AGCM). First, the FEM captures representative information from the raw data. Then, the DTW-based attention module uses a dynamic programming algorithm to enhance the spatial expressiveness of the extracted features. The TCM employs multihead attention mechanisms to effectively capture temporal dependencies. In particular, we adopt a normalization-based attention module (NAM), which uses the contributing factors of the weights to suppress less salient information. Meanwhile, the AGCM obtains optimal spatial functional connections between polysomnography (PSG) channels, benefiting from the adaptive learning of the adjacency matrix. Finally, we fuse the temporal and spatial features by concatenation to obtain the prediction results. We use the Montreal Archive of Sleep Studies (MASS) and ISRUC-S3 datasets to assess TS-AGCMM. Our results show that TS-AGCMM performs comparably to other currently available approaches, achieving accuracies of 89.1% and 81.2%, macro-averaged F1-scores of 84.7% and 79.5%, and Cohen's kappa coefficients of 83.9% and 75.8% on the two databases, respectively.
AB - Evaluating sleep quality through reliable sleep staging is of paramount importance. Although many studies have achieved fair performance in sleep stage classification, effectively leveraging the spatial-temporal characteristics of multichannel brain recordings remains challenging. In this study, we develop a novel temporal self-attentional and adaptive graph convolutional mixed model (TS-AGCMM), comprising a feature extraction module (FEM), a dynamic time warping (DTW)-based attention module, a temporal context module (TCM), and an adaptive graph convolutional module (AGCM). First, the FEM captures representative information from the raw data. Then, the DTW-based attention module uses a dynamic programming algorithm to enhance the spatial expressiveness of the extracted features. The TCM employs multihead attention mechanisms to effectively capture temporal dependencies. In particular, we adopt a normalization-based attention module (NAM), which uses the contributing factors of the weights to suppress less salient information. Meanwhile, the AGCM obtains optimal spatial functional connections between polysomnography (PSG) channels, benefiting from the adaptive learning of the adjacency matrix. Finally, we fuse the temporal and spatial features by concatenation to obtain the prediction results. We use the Montreal Archive of Sleep Studies (MASS) and ISRUC-S3 datasets to assess TS-AGCMM. Our results show that TS-AGCMM performs comparably to other currently available approaches, achieving accuracies of 89.1% and 81.2%, macro-averaged F1-scores of 84.7% and 79.5%, and Cohen's kappa coefficients of 83.9% and 75.8% on the two databases, respectively.
KW - Adaptive graph convolutional
KW - multihead attention (MHA)
KW - sleep stage classification
KW - temporal context module (TCM)
UR - http://www.scopus.com/inward/record.url?scp=85187365608&partnerID=8YFLogxK
U2 - 10.1109/JSEN.2024.3371456
DO - 10.1109/JSEN.2024.3371456
M3 - Article
AN - SCOPUS:85187365608
SN - 1530-437X
VL - 24
SP - 12840
EP - 12852
JO - IEEE Sensors Journal
JF - IEEE Sensors Journal
IS - 8
ER -