VAE-CapsNet: A common emotion information extractor for cross-subject emotion recognition

Huayu Chen, Junxiang Li, Huanhuan He, Shuting Sun, Jing Zhu, Xiaowei Li*, Bin Hu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

Owing to the uniqueness of brain structure, function, and emotional experience, neural activity patterns differ among subjects. As a result, affective brain–computer interfaces (aBCIs) must account for individual differences in neural activity, electroencephalogram (EEG) data, and especially emotion patterns (EPs). These differences in emotion information types and distribution patterns, such as session EP differences (SEPD) and individual EP differences (IEPD), pose notable challenges for cross-subject and cross-session emotion classification. To address these challenges, we propose VAE-CapsNet, a common emotion information extraction framework that combines a variational autoencoder (VAE) with a capsule network (CapsNet). A VAE-based unsupervised EP transformation module is used to mitigate SEPD, while five segmental activation functions are introduced to match EPs across different subjects. The CapsNet-based information extractor efficiently handles diverse emotion information, producing universal emotional features from different sessions. We validated the performance of the VAE-CapsNet framework through cross-session, cross-subject, and cross-dataset experiments on the SEED, SEED-IV, SEED-V, and FACED datasets.
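The abstract describes a two-stage architecture: a VAE that maps session-specific EEG features toward a shared latent space, followed by a capsule-network classifier. The sketch below illustrates that general idea only; the 310-dimensional SEED-style differential-entropy input, layer sizes, routing iterations, and the combined reconstruction/KL/cross-entropy loss are illustrative assumptions, not the paper's actual configuration (capsule networks, for instance, often use a margin loss instead of cross-entropy).

```python
# Minimal sketch of a VAE + capsule-network pipeline, assuming SEED-style DE features.
import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT_DIM = 310      # assumed: 62 channels x 5 frequency bands of differential entropy
LATENT_DIM = 64     # assumed latent size
NUM_CLASSES = 3     # e.g. positive / neutral / negative (SEED)

class VAEEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(FEAT_DIM, 128)
        self.mu = nn.Linear(128, LATENT_DIM)
        self.logvar = nn.Linear(128, LATENT_DIM)

    def forward(self, x):
        h = F.relu(self.fc(x))
        return self.mu(h), self.logvar(h)

class VAEDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 128), nn.ReLU(), nn.Linear(128, FEAT_DIM)
        )

    def forward(self, z):
        return self.net(z)

def squash(s, dim=-1, eps=1e-8):
    # Capsule "squash" non-linearity: preserves direction, bounds length in [0, 1).
    norm_sq = (s ** 2).sum(dim=dim, keepdim=True)
    return (norm_sq / (1.0 + norm_sq)) * s / torch.sqrt(norm_sq + eps)

class CapsuleClassifier(nn.Module):
    """One capsule layer with dynamic routing over primary capsules (assumed sizes)."""
    def __init__(self, in_caps=8, in_dim=8, out_caps=NUM_CLASSES, out_dim=16, iters=3):
        super().__init__()
        assert in_caps * in_dim == LATENT_DIM
        self.in_caps, self.out_caps, self.iters = in_caps, out_caps, iters
        self.W = nn.Parameter(0.01 * torch.randn(in_caps, out_caps, out_dim, in_dim))

    def forward(self, z):
        u = z.view(-1, self.in_caps, 1, self.W.size(-1), 1)      # (B, in, 1, in_dim, 1)
        u_hat = (self.W.unsqueeze(0) @ u).squeeze(-1)            # (B, in, out, out_dim)
        b = torch.zeros(z.size(0), self.in_caps, self.out_caps, device=z.device)
        for _ in range(self.iters):                              # dynamic routing
            c = F.softmax(b, dim=-1).unsqueeze(-1)               # coupling coefficients
            v = squash((c * u_hat).sum(dim=1))                   # (B, out, out_dim)
            b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)
        return v.norm(dim=-1)                                    # class scores = capsule lengths

class VAECapsNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder, self.decoder = VAEEncoder(), VAEDecoder()
        self.capsules = CapsuleClassifier()

    def forward(self, x):
        mu, logvar = self.encoder(x)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation
        return self.decoder(z), mu, logvar, self.capsules(z)

def loss_fn(x, recon, mu, logvar, scores, labels, beta=1.0):
    # Simplified joint objective: reconstruction + KL divergence + classification.
    recon_l = F.mse_loss(recon, x)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    cls = F.cross_entropy(scores, labels)
    return recon_l + beta * kl + cls

if __name__ == "__main__":
    model = VAECapsNetSketch()
    x = torch.randn(4, FEAT_DIM)                  # a batch of 4 feature vectors
    y = torch.randint(0, NUM_CLASSES, (4,))
    recon, mu, logvar, scores = model(x)
    print(loss_fn(x, recon, mu, logvar, scores, y))
```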

Original language: English
Article number: 113018
Journal: Knowledge-Based Systems
Volume: 311
DOIs
Publication status: Published - 28 Feb 2025

Keywords

  • Affective computing
  • Brain-computer interface (BCI)
  • Cross-dataset
  • Cross-session
  • Cross-subject
  • Electroencephalogram (EEG)
  • Emotion recognition
  • Subject-dependent
