TY - GEN
T1 - View-Independent Facial Action Unit Detection
AU - Tang, Chuangao
AU - Zheng, Wenming
AU - Yan, Jingwei
AU - Li, Qiang
AU - Li, Yang
AU - Zhang, Tong
AU - Cui, Zhen
N1 - Publisher Copyright:
© 2017 IEEE.
PY - 2017/6/28
Y1 - 2017/6/28
N2 - Automatic Facial Action Unit (AU) detection has drawn increasing attention in recent years due to its significance for facial expression analysis. Frontal-view AU detection has been extensively evaluated, but cross-pose AU detection remains a less-explored problem due to the scarcity of related datasets. The Facial Expression Recognition and Analysis challenge (FERA2017) recently released a large-scale video-based AU detection dataset spanning different facial poses. To address this challenging task, we develop a simple and efficient deep-learning-based system to detect AU occurrence under nine different facial views. In this system, we first crop out facial images using morphological operations, including binary segmentation, connected-component labeling, and region-boundary extraction; then, for each type of AU, we train a corresponding expert network by fine-tuning the VGG-Face network on cross-view facial images, so as to extract more discriminative features for the subsequent binary classification. In the AU detection sub-challenge, our proposed method achieves a mean accuracy of 77.8% (vs. the baseline of 56.1%) and improves the F1 score to 57.4% (vs. the baseline of 45.2%).
AB - Automatic Facial Action Unit (AU) detection has drawn increasing attention in recent years due to its significance for facial expression analysis. Frontal-view AU detection has been extensively evaluated, but cross-pose AU detection remains a less-explored problem due to the scarcity of related datasets. The Facial Expression Recognition and Analysis challenge (FERA2017) recently released a large-scale video-based AU detection dataset spanning different facial poses. To address this challenging task, we develop a simple and efficient deep-learning-based system to detect AU occurrence under nine different facial views. In this system, we first crop out facial images using morphological operations, including binary segmentation, connected-component labeling, and region-boundary extraction; then, for each type of AU, we train a corresponding expert network by fine-tuning the VGG-Face network on cross-view facial images, so as to extract more discriminative features for the subsequent binary classification. In the AU detection sub-challenge, our proposed method achieves a mean accuracy of 77.8% (vs. the baseline of 56.1%) and improves the F1 score to 57.4% (vs. the baseline of 45.2%).
UR - http://www.scopus.com/inward/record.url?scp=85026315573&partnerID=8YFLogxK
U2 - 10.1109/FG.2017.113
DO - 10.1109/FG.2017.113
M3 - Conference contribution
AN - SCOPUS:85026315573
T3 - Proceedings - 12th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2017 - 1st International Workshop on Adaptive Shot Learning for Gesture Understanding and Production, ASL4GUP 2017, Biometrics in the Wild, Bwild 2017, Heterogeneous Face Recognition, HFR 2017, Joint Challenge on Dominant and Complementary Emotion Recognition Using Micro Emotion Features and Head-Pose Estimation, DCER and HPE 2017 and 3rd Facial Expression Recognition and Analysis Challenge, FERA 2017
SP - 878
EP - 882
BT - Proceedings - 12th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2017 - 1st International Workshop on Adaptive Shot Learning for Gesture Understanding and Production, ASL4GUP 2017, Biometrics in the Wild, Bwild 2017, Heterogeneous Face Recognition, HFR 2017, Joint Challenge on Dominant and Complementary Emotion Recognition Using Micro Emotion Features and Head-Pose Estimation, DCER and HPE 2017 and 3rd Facial Expression Recognition and Analysis Challenge, FERA 2017
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 12th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2017
Y2 - 30 May 2017 through 3 June 2017
ER -