Emotion recognition in the wild via sparse transductive transfer linear discriminant analysis

Yuan Zong, Wenming Zheng*, Xiaohua Huang, Keyu Yan, Jingwei Yan, Tong Zhang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

26 Citations (Scopus)

Abstract

Recently, emotion recognition in the wild has attracted increasing attention in computer vision and affective computing. In contrast to classical emotion recognition, emotion recognition in the wild is more challenging because the databases are collected under real-world scenarios. Such databases inevitably contain various adverse samples whose emotion labels are hard to identify with classical emotion recognition methods developed on idealized databases, which significantly increases the difficulty of the emotion recognition task on wild databases. In this paper, we propose a transductive transfer learning framework to handle the problem of emotion recognition in the wild. We develop a sparse transductive transfer linear discriminant analysis (STTLDA) for facial expression recognition and speech emotion recognition under real-world environments, respectively. To the best of our knowledge, we are the first to treat emotion recognition in the wild as a transfer learning problem and to use a transductive transfer learning method to eliminate the distribution difference between training and testing samples caused by the "wild" conditions. We conduct extensive experiments on the SFEW 2.0 and AFEW 4.0 and 5.0 (audio part) databases, which were used in the Emotion Recognition in the Wild Challenge (EmotiW 2014 and 2015), to evaluate the proposed method. Experimental results demonstrate that the proposed STTLDA achieves satisfactory performance compared with the baselines provided by the challenge organizers and with several competitive methods. In addition, we report our earlier results in the static-image-based facial expression recognition challenge of EmotiW 2015, in which we achieved an accuracy of 50% on the Test set, a 10.87% improvement over the baseline released by the challenge organizers.
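The abstract does not give the STTLDA formulation itself, so the following is only a rough, hypothetical sketch of the transductive idea it describes: letting the unlabeled test data participate in fitting the model so that the gap between the training and testing distributions is reduced. A nearest-class-mean classifier stands in for the sparse LDA projection, and the self-training loop stands in for the paper's optimization; all names here are illustrative, not the authors' method.

```python
# Hedged sketch, NOT the paper's STTLDA. It illustrates transductive
# transfer in its simplest form: (1) fit class means on labeled source
# (training) data, (2) pseudo-label the unlabeled target (test) set,
# (3) refit the means on source + pseudo-labeled target, and repeat
# until the pseudo-labels stop changing.
import numpy as np

def nearest_mean_labels(X, means):
    # Assign each row of X to the class with the nearest mean.
    d = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def transductive_nearest_mean(Xs, ys, Xt, n_classes, n_iter=10):
    # Initial model from labeled source data only.
    means = np.vstack([Xs[ys == c].mean(axis=0) for c in range(n_classes)])
    yt = nearest_mean_labels(Xt, means)
    for _ in range(n_iter):
        # Refit on source data plus pseudo-labeled target data, so the
        # target distribution influences the learned class representation.
        X = np.vstack([Xs, Xt])
        y = np.concatenate([ys, yt])
        means = np.vstack([X[y == c].mean(axis=0) for c in range(n_classes)])
        yt_new = nearest_mean_labels(Xt, means)
        if np.array_equal(yt_new, yt):
            break
        yt = yt_new
    return yt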

Original language: English
Pages (from-to): 163-172
Number of pages: 10
Journal: Journal on Multimodal User Interfaces
Volume: 10
Issue number: 2
DOIs
Publication status: Published - 1 Jun 2016
Externally published: Yes

Keywords

  • Domain adaptation
  • Emotion recognition in the wild
  • Facial expression recognition
  • Sparse transductive transfer linear discriminant analysis
  • Speech emotion recognition
  • Transfer learning
