A Deep Neural Network-Driven Feature Learning Method for Multi-view Facial Expression Recognition

Tong Zhang, Wenming Zheng*, Zhen Cui, Yuan Zong, Jingwei Yan, Keyu Yan

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

298 Citations (Scopus)

Abstract

In this paper, a novel deep neural network (DNN)-driven feature learning method is proposed and applied to multi-view facial expression recognition (FER). In this method, scale invariant feature transform (SIFT) features corresponding to a set of landmark points are first extracted from each facial image. Then, a feature matrix consisting of the extracted SIFT feature vectors is used as input data and sent to a well-designed DNN model that learns optimal discriminative features for expression classification. The proposed DNN model employs several layers to characterize the relationship between the SIFT feature vectors and their corresponding high-level semantic information. By training the DNN model, we are able to learn a set of optimal features that are well suited to classifying facial expressions across different facial views. To evaluate the effectiveness of the proposed method, two nonfrontal facial expression databases, namely BU-3DFE and Multi-PIE, are used to test our method, and the experimental results show that our algorithm outperforms state-of-the-art methods.
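The pipeline described above (SIFT descriptors at facial landmarks stacked into a feature matrix, then fed to a multi-layer network that outputs expression classes) can be sketched in NumPy as follows. This is a minimal illustration only: the landmark count, hidden width, and random stand-in features are assumptions for the sketch, not the paper's actual architecture, and real SIFT extraction (e.g. via OpenCV) is replaced by random descriptors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions -- not taken from the paper.
N_LANDMARKS = 49      # assumed number of facial landmark points
SIFT_DIM = 128        # standard SIFT descriptor length
N_CLASSES = 6         # six basic expressions (BU-3DFE setting)
HIDDEN = 256          # arbitrary hidden-layer width for the sketch

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def forward(feature_matrix, W1, b1, W2, b2):
    """Map an (N_LANDMARKS, SIFT_DIM) SIFT feature matrix to
    expression-class probabilities via one hidden layer."""
    x = feature_matrix.reshape(-1)   # flatten the feature matrix
    h = relu(x @ W1 + b1)            # learned hidden representation
    return softmax(h @ W2 + b2)      # class probabilities

# Stand-in SIFT feature matrix (random, in place of real descriptors)
# and randomly initialized weights.
features = rng.standard_normal((N_LANDMARKS, SIFT_DIM))
W1 = rng.standard_normal((N_LANDMARKS * SIFT_DIM, HIDDEN)) * 0.01
b1 = np.zeros(HIDDEN)
W2 = rng.standard_normal((HIDDEN, N_CLASSES)) * 0.01
b2 = np.zeros(N_CLASSES)

probs = forward(features, W1, b1, W2, b2)
```

In the actual method the network weights would be trained on labeled multi-view expression data, so that the learned hidden representation becomes discriminative across facial views; here they are random and only the data flow is shown.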

Original language: English
Article number: 7530823
Pages (from-to): 2528-2536
Number of pages: 9
Journal: IEEE Transactions on Multimedia
Volume: 18
Issue number: 12
Publication status: Published - Dec 2016
Externally published: Yes

Keywords

  • Deep neural network (DNN)
  • multi-view facial expression recognition
  • scale invariant feature transform (SIFT)

