Abstract
Although multimodal physiological data from the central and peripheral nervous systems can objectively reflect human emotional states, the individual differences caused by their non-stationarity and low signal-to-noise ratio pose several challenges for cross-subject emotion recognition tasks. Many previous studies focused on learning highly correlated information between different modalities, which easily leads to incomplete descriptions of the physiological signals and difficulties in aligning critical emotional information. To tackle these challenges, this paper proposes a novel multimodal emotion recognition model, termed Completeness-induced Adaptative Broad Learning (CiABL), that improves generalization to unseen target-domain subjects. CiABL gradually explores a complete modality representation that encompasses both modality-relevant and modality-independent information, avoiding the performance loss caused by spurious correlations between modalities. Subsequently, a well-designed weighted representation distribution alignment mechanism appropriately aligns the marginal and conditional distributions, greatly reducing the influence of individual differences. Extensive experiments on the SEED and SEED-FRA datasets demonstrate the effectiveness and generalization of the proposed CiABL, which outperforms current state-of-the-art methods. In addition, CiABL can precisely quantify the importance of global features, properly explaining the modality contributions and averaged brain activation patterns in cross-subject emotion recognition tasks.
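As a rough illustration of the weighted marginal and conditional distribution alignment described in the abstract, the sketch below combines a marginal discrepancy term over whole batches with a per-class (conditional) term computed from target pseudo-labels, balanced by a weight. All names (`mmd`, `weighted_alignment_loss`, `mu`) and the linear-kernel MMD choice are illustrative assumptions, not the paper's actual implementation.

```python
import torch

def mmd(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Linear-kernel maximum mean discrepancy: squared distance
    # between the mean embeddings of two feature batches.
    return (x.mean(dim=0) - y.mean(dim=0)).pow(2).sum()

def weighted_alignment_loss(src_feat, src_label, tgt_feat, tgt_pseudo,
                            num_classes: int, mu: float = 0.5):
    # Hypothetical sketch: marginal term aligns the overall feature
    # distributions; conditional term aligns class-wise distributions
    # using pseudo-labels predicted for the unlabeled target subjects.
    marginal = mmd(src_feat, tgt_feat)
    conditional = torch.zeros((), device=src_feat.device)
    for c in range(num_classes):
        s = src_feat[src_label == c]
        t = tgt_feat[tgt_pseudo == c]
        if len(s) > 0 and len(t) > 0:
            conditional = conditional + mmd(s, t)
    # mu trades off marginal vs. conditional alignment.
    return (1 - mu) * marginal + mu * conditional
```

In such schemes, the target pseudo-labels are typically refreshed from the current classifier at each training round, so the conditional term sharpens as predictions improve.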
Original language | English |
---|---|
Pages (from-to) | 1-15 |
Number of pages | 15 |
Journal | IEEE Transactions on Affective Computing |
DOI | |
Publication status | Accepted/In press - 2024 |
Published externally | Yes |