TY - JOUR
T1 - VAE-CapsNet
T2 - A common emotion information extractor for cross-subject emotion recognition
AU - Chen, Huayu
AU - Li, Junxiang
AU - He, Huanhuan
AU - Sun, Shuting
AU - Zhu, Jing
AU - Li, Xiaowei
AU - Hu, Bin
N1 - Publisher Copyright:
© 2025 Elsevier B.V.
PY - 2025/2/28
Y1 - 2025/2/28
N2 - Owing to the uniqueness of brain structure, function, and emotional experiences, neural activity patterns differ among subjects. As a result, affective brain–computer interfaces (aBCIs) must account for individual differences in neural activity, electroencephalogram data, and particularly emotion patterns (EPs). These differences in emotion information types and distribution patterns, such as session EP differences (SEPD) and individual EP differences (IEPD), pose notable challenges for cross-subject and cross-session emotion classification. To address these challenges, we propose a novel common emotion information extraction framework, VAE-CapsNet, that combines a variational autoencoder (VAE) and a capsule network (CapsNet). A VAE-based unsupervised EP transformation module is used to mitigate SEPD, while five segmental activation functions are introduced to match EPs across different subjects. The CapsNet-based information extractor efficiently handles various types of emotion information, producing universal emotional features from different sessions. We validated the performance of the VAE-CapsNet framework through cross-session, cross-subject, and cross-dataset experiments on the SEED, SEED-IV, SEED-V, and FACED datasets.
AB - Owing to the uniqueness of brain structure, function, and emotional experiences, neural activity patterns differ among subjects. As a result, affective brain–computer interfaces (aBCIs) must account for individual differences in neural activity, electroencephalogram data, and particularly emotion patterns (EPs). These differences in emotion information types and distribution patterns, such as session EP differences (SEPD) and individual EP differences (IEPD), pose notable challenges for cross-subject and cross-session emotion classification. To address these challenges, we propose a novel common emotion information extraction framework, VAE-CapsNet, that combines a variational autoencoder (VAE) and a capsule network (CapsNet). A VAE-based unsupervised EP transformation module is used to mitigate SEPD, while five segmental activation functions are introduced to match EPs across different subjects. The CapsNet-based information extractor efficiently handles various types of emotion information, producing universal emotional features from different sessions. We validated the performance of the VAE-CapsNet framework through cross-session, cross-subject, and cross-dataset experiments on the SEED, SEED-IV, SEED-V, and FACED datasets.
KW - Affective computing
KW - Brain-computer interface (BCI)
KW - Cross-dataset
KW - Cross-session
KW - Cross-subject
KW - Electroencephalogram (EEG)
KW - Emotion recognition
KW - Subject-dependent
UR - http://www.scopus.com/inward/record.url?scp=85216286462&partnerID=8YFLogxK
U2 - 10.1016/j.knosys.2025.113018
DO - 10.1016/j.knosys.2025.113018
M3 - Article
AN - SCOPUS:85216286462
SN - 0950-7051
VL - 311
JO - Knowledge-Based Systems
JF - Knowledge-Based Systems
M1 - 113018
ER -