Abstract
In recent years, dictionary learning (DL) based methods have achieved excellent performance in facial expression recognition (FER), where training and testing data are usually presumed to follow the same distribution. In practical scenarios, however, this assumption is often violated, especially when training and testing data come from different databases, a.k.a. the cross-database FER problem. In this paper, we focus on the unsupervised cross-domain FER problem, in which all samples in the target domain are completely unannotated. To address this problem, we propose an unsupervised domain adaptive dictionary learning (UDADL) model that bridges the source and target domains by learning a shared dictionary. The encodings of the two domains on this dictionary are constrained to be mutually embedded in each other. To bypass the complexity of the resulting optimization, we introduce an analysis dictionary as a latent variable that supplies approximate codes, so that the sub-problems admit analytical solutions. To evaluate the performance of the proposed UDADL model, we conduct extensive experiments on the widely used Multi-PIE and BU-3DFE databases. The experimental results demonstrate that the proposed UDADL method outperforms recent domain adaptation FER methods and achieves state-of-the-art performance.
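To make the modeling idea concrete, below is a minimal sketch of a shared-dictionary domain adaptation objective of the kind the abstract describes. The exact terms, weights, and mutual-embedding regularizer used in UDADL are not reproduced here; the symbols, the analysis-dictionary coupling, and the form of the regularizer Ω are assumptions for illustration only.

```latex
% Illustrative shared-dictionary domain adaptation objective
% (an assumed generic form, NOT the exact UDADL formulation).
% X_s, X_t : source / target feature matrices
% D        : shared synthesis dictionary with atoms d_i
% A_s, A_t : sparse codes of the two domains on D
% P        : analysis dictionary giving approximate codes A ~= P X
% Omega    : coupling term standing in for the mutual-embedding constraint
\min_{D,\,P,\,A_s,\,A_t}\;
      \|X_s - D A_s\|_F^2 + \|X_t - D A_t\|_F^2
    + \lambda \bigl( \|A_s\|_1 + \|A_t\|_1 \bigr)
    + \alpha  \bigl( \|A_s - P X_s\|_F^2 + \|A_t - P X_t\|_F^2 \bigr)
    + \beta\, \Omega(A_s, A_t)
\quad \text{s.t. } \|d_i\|_2 \le 1 \;\; \forall i .
```

Under this sketch, fixing D and P leaves quadratic (plus sparsity) sub-problems in A_s and A_t whose updates can be computed in closed or near-closed form thanks to the analysis-dictionary term, which is the role the abstract assigns to using approximate solutions as a latent variable.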
Original language | English |
---|---|
Pages (from-to) | 84-91 |
Number of pages | 8 |
Journal | Neurocomputing |
Volume | 319 |
DOIs | |
Publication status | Published - 30 Nov 2018 |
Externally published | Yes |
Keywords
- Cross-domain facial expression recognition
- Dictionary learning
- Domain adaptation
- Facial expression recognition