View-Independent Facial Action Unit Detection

Chuangao Tang, Wenming Zheng*, Jingwei Yan, Qiang Li, Yang Li, Tong Zhang, Zhen Cui

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

31 Citations (Scopus)

Abstract

Automatic Facial Action Unit (AU) detection has drawn increasing attention in recent years due to its significance for facial expression analysis. Frontal-view AU detection has been extensively evaluated, but cross-pose AU detection remains a less-explored problem owing to the scarcity of related datasets. The Facial Expression Recognition and Analysis challenge (FERA2017) recently released a large-scale video-based AU detection dataset spanning different facial poses. To address this challenging task, we develop a simple and efficient deep-learning-based system to detect AU occurrence under nine different facial views. In this system, we first crop out facial images using morphology operations, including binary segmentation, connected-components labeling, and region-boundary extraction; then, for each type of AU, we train a corresponding expert network by fine-tuning the VGG-Face network on cross-view facial images, so as to extract more discriminative features for the subsequent binary classification. In the AU detection sub-challenge, our proposed method achieves a mean accuracy of 77.8% (vs. the 56.1% baseline) and improves the F1 score to 57.4% (vs. the 45.2% baseline).
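The preprocessing step described in the abstract (binary segmentation, connected-components labeling, and region-boundary extraction) can be illustrated with a minimal OpenCV sketch. This is not the authors' released code: the use of Otsu thresholding, the choice of the largest foreground component, and the 224x224 output size are illustrative assumptions.

```python
# Minimal sketch of morphology-based face cropping, assuming Otsu thresholding,
# largest-component selection, and a 224x224 crop size (not the paper's exact settings).
import cv2
import numpy as np

def crop_face(gray_frame: np.ndarray, out_size: int = 224) -> np.ndarray:
    # 1) Binary segmentation: separate foreground from background.
    _, binary = cv2.threshold(gray_frame, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # 2) Connected-components labeling: enumerate candidate foreground regions.
    num_labels, labels, stats, _ = cv2.connectedComponentsWithStats(binary)

    if num_labels < 2:
        # No foreground component found; fall back to the full frame.
        x, y, w, h = 0, 0, gray_frame.shape[1], gray_frame.shape[0]
    else:
        # 3) Region boundaries: take the bounding box of the largest
        #    non-background component as the face region.
        largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
        x = stats[largest, cv2.CC_STAT_LEFT]
        y = stats[largest, cv2.CC_STAT_TOP]
        w = stats[largest, cv2.CC_STAT_WIDTH]
        h = stats[largest, cv2.CC_STAT_HEIGHT]

    crop = gray_frame[y:y + h, x:x + w]
    # Resize so the crop matches the input size expected by a VGG-style network.
    return cv2.resize(crop, (out_size, out_size))
```

The resulting crops would then be fed to one fine-tuned VGG-Face expert network per AU for binary occurrence classification, as the abstract describes.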

Original language: English
Title of host publication: Proceedings - 12th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2017 - 1st International Workshop on Adaptive Shot Learning for Gesture Understanding and Production, ASL4GUP 2017, Biometrics in the Wild, Bwild 2017, Heterogeneous Face Recognition, HFR 2017, Joint Challenge on Dominant and Complementary Emotion Recognition Using Micro Emotion Features and Head-Pose Estimation, DCER and HPE 2017 and 3rd Facial Expression Recognition and Analysis Challenge, FERA 2017
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 878-882
Number of pages: 5
ISBN (Electronic): 9781509040230
DOIs
Publication status: Published - 28 Jun 2017
Externally published: Yes
Event: 12th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2017 - Washington, United States
Duration: 30 May 2017 → 3 Jun 2017

Publication series

Name: Proceedings - 12th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2017 - 1st International Workshop on Adaptive Shot Learning for Gesture Understanding and Production, ASL4GUP 2017, Biometrics in the Wild, Bwild 2017, Heterogeneous Face Recognition, HFR 2017, Joint Challenge on Dominant and Complementary Emotion Recognition Using Micro Emotion Features and Head-Pose Estimation, DCER and HPE 2017 and 3rd Facial Expression Recognition and Analysis Challenge, FERA 2017

Conference

Conference: 12th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2017
Country/Territory: United States
City: Washington
Period: 30/05/17 → 3/06/17
