Cross-Domain Facial Expression Recognition Based on Transductive Deep Transfer Learning

Keyu Yan, Wenming Zheng*, Tong Zhang, Yuan Zong, Chuangao Tang, Cheng Lu, Zhen Cui

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

25 Citations (Scopus)

Abstract

In this paper, we propose a novel end-to-end transductive deep transfer learning network (TDTLN) to address the challenging cross-domain facial expression recognition problem, in which both the source and target databases are used to jointly learn optimal nonlinear discriminative features and thereby improve label prediction for the target samples. The labels of the target samples are treated as part of the network parameters and are optimized together with the parameters of TDTLN, so that the cross-entropy loss on the source domain data and the regression loss on the target domain data can be computed simultaneously. Finally, to evaluate the recognition performance of the proposed TDTLN method, we conduct extensive cross-database experiments on four commonly used multi-view facial expression databases, namely BU-3DFE, Multi-PIE, SFEW, and RAF. The experimental results show that the proposed TDTLN method outperforms state-of-the-art methods.
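The joint objective described in the abstract can be illustrated with a minimal sketch: a shared feature extractor, cross-entropy on labelled source samples, and a regression loss on target samples whose soft labels are themselves learnable parameters. This is not the authors' released code; the PyTorch framing, the small MLP head standing in for a VGGFace-style backbone, the MSE form of the regression loss, and all names (TransductiveHead, joint_loss, lam) are illustrative assumptions.

```python
# Conceptual sketch of a transductive joint loss, assuming PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransductiveHead(nn.Module):
    def __init__(self, feat_dim, n_classes, n_target):
        super().__init__()
        # Shared nonlinear feature extractor (stand-in for a deep backbone).
        self.features = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
        )
        self.classifier = nn.Linear(128, n_classes)
        # Target labels optimized jointly with the network parameters:
        # one learnable soft-label vector per target sample (assumption).
        self.target_logits = nn.Parameter(torch.zeros(n_target, n_classes))

    def forward(self, x):
        return self.classifier(self.features(x))

def joint_loss(model, x_src, y_src, x_tgt, tgt_idx, lam=1.0):
    """Cross-entropy on source + regression toward the learnable target labels."""
    src_loss = F.cross_entropy(model(x_src), y_src)
    # Soft labels for the current target mini-batch.
    soft_labels = F.softmax(model.target_logits[tgt_idx], dim=1)
    tgt_probs = F.softmax(model(x_tgt), dim=1)
    tgt_loss = F.mse_loss(tgt_probs, soft_labels)  # regression loss on target data
    return src_loss + lam * tgt_loss

if __name__ == "__main__":
    # Minimal usage with random stand-in data (7 expression classes assumed).
    torch.manual_seed(0)
    model = TransductiveHead(feat_dim=512, n_classes=7, n_target=100)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x_src, y_src = torch.randn(32, 512), torch.randint(0, 7, (32,))
    x_tgt, tgt_idx = torch.randn(16, 512), torch.arange(16)
    loss = joint_loss(model, x_src, y_src, x_tgt, tgt_idx)
    loss.backward()
    opt.step()
    print(f"joint loss: {loss.item():.4f}")
```

At inference on the target domain, the learned target_logits (or the classifier's own predictions) would supply the target labels, which is what makes the scheme transductive rather than inductive.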

Original language: English
Article number: 8786815
Pages (from-to): 108906-108915
Number of pages: 10
Journal: IEEE Access
Volume: 7
DOIs
Publication status: Published - 2019
Externally published: Yes

Keywords

  • Cross-domain facial expression recognition
  • VGGFace16-Net
  • transductive transfer learning
