Semantic Disentangling for Audiovisual Induced Emotion

Qunxi Dong, Wang Zheng, Fuze Tian, Lixian Zhu, Kun Qian, Jingyu Liu*, Xuan Zhang*

*Corresponding authors of this work

Research output: Contribution to journal › Article › peer-review

Abstract

Emotion regulation plays an important role in human behavior but exhibits considerable heterogeneity across individuals, which weakens the generalization ability of emotion models. In this work, we aim to achieve robust emotion prediction through efficient disentanglement of affective semantic representations. Specifically, the data generation mechanism behind observations from different perspectives is modeled causally, where the latent variables related to emotion are explicitly separated into three parts: an intrinsic-related part, an extrinsic-related part, and a spurious-related part. Affective semantic features consist of the first two parts, with the understanding that the spurious latent variables generate the inherent biases in the data. Furthermore, a variational autoencoder with a reformulated objective function is proposed to learn such disentangled latent variables; only the semantic representations are used to perform the final classification task, avoiding interference from the spurious variables. In addition, for the electroencephalography (EEG) data used in this article, a space-frequency mapping method is introduced to improve information utilization. Comprehensive experiments on popular emotion datasets show that the proposed method achieves competitive intersubject generalization performance. Our results highlight the potential of efficient latent representation disentanglement for addressing the complexity challenges of emotion recognition.
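The core idea in the abstract can be illustrated with a minimal sketch: the learned latent code is partitioned into intrinsic, extrinsic, and spurious parts, and only the first two (the affective semantic representation) are passed to the classifier. All names and dimensions below are hypothetical illustrations, not the authors' implementation.

```python
# Hypothetical sketch of the latent-partition step described in the abstract.
# A flat latent vector z (e.g., the output of a VAE encoder) is split into
# intrinsic-, extrinsic-, and spurious-related parts; the classifier sees
# only the concatenation of the first two.

def split_latent(z, d_int, d_ext, d_spu):
    """Partition a flat latent vector into the three causal parts."""
    assert len(z) == d_int + d_ext + d_spu, "dimensions must sum to len(z)"
    z_int = z[:d_int]                      # intrinsic-related variables
    z_ext = z[d_int:d_int + d_ext]         # extrinsic-related variables
    z_spu = z[d_int + d_ext:]              # spurious-related variables
    return z_int, z_ext, z_spu

def semantic_representation(z, d_int, d_ext, d_spu):
    """Keep intrinsic + extrinsic parts; discard spurious variables."""
    z_int, z_ext, _ = split_latent(z, d_int, d_ext, d_spu)
    return z_int + z_ext                   # list concatenation

# Toy example: a 9-dim latent code with a 4/3/2 split.
z = [0.1, 0.2, 0.3, 0.4, 1.1, 1.2, 1.3, 9.0, 9.9]
sem = semantic_representation(z, d_int=4, d_ext=3, d_spu=2)
print(sem)  # the 7 semantic dimensions; the two spurious entries are dropped
```

In the paper's setup, this separation is learned by a VAE with a reformulated objective rather than fixed by index, but the downstream principle is the same: the spurious block never reaches the emotion classifier.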

Original language: English
Journal: IEEE Transactions on Computational Social Systems
DOI
Publication status: Accepted/In press - 2024
