Research of Facial Expression Recognition Based on Deep Learning

Linhao Zhang, Yuliang Yang, Wanchong Li, Shuai Dang, Mengyu Zhu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

4 Citations (Scopus)

Abstract

This paper proposes a convolutional neural network for facial expression recognition (FER) based on deep learning, named FERNet. FERNet contains 4 residual depth-wise separable convolution modules, each of which includes 3 depth-wise separable convolution layers and 1 standard convolution layer. It is a fully convolutional neural network that replaces the fully connected layer with a global average pooling (GAP) layer. The results show that the average accuracy of FERNet is 93.7% on the KDEF dataset and 71.9% on the RAF dataset. Compared with other networks and methods, FERNet achieves better performance in facial expression recognition.
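The abstract fixes only the high-level design: 4 residual depth-wise separable convolution modules (each with 3 depth-wise separable convolution layers and 1 standard convolution layer) and a GAP head in place of fully connected layers. The PyTorch sketch below illustrates one plausible reading of that structure; the channel widths, kernel sizes, the 48×48 grayscale input, batch-norm/ReLU placement, and the use of the standard convolution as a residual shortcut are assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise separable convolution: a per-channel (depthwise) 3x3 conv
    followed by a 1x1 pointwise conv, as in MobileNet/Xception."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1,
                                   groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.bn(self.pointwise(self.depthwise(x))))

class ResidualDSConvModule(nn.Module):
    """One FERNet-style module: 3 depthwise separable conv layers on the
    main path; here the 1 standard conv is assumed to project the residual
    shortcut to matching channels (the exact placement is an assumption)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.main = nn.Sequential(
            DepthwiseSeparableConv(in_ch, out_ch),
            DepthwiseSeparableConv(out_ch, out_ch),
            DepthwiseSeparableConv(out_ch, out_ch),
        )
        self.shortcut = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        self.pool = nn.MaxPool2d(2)  # downsample between modules (assumed)

    def forward(self, x):
        return self.pool(self.main(x) + self.shortcut(x))

class FERNetSketch(nn.Module):
    """Fully convolutional head: a 1x1 conv plus global average pooling
    replaces the fully connected classifier, per the abstract."""
    def __init__(self, num_classes=7):  # 7 basic expressions (assumed)
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(32), nn.ReLU(inplace=True),
        )
        self.blocks = nn.Sequential(
            ResidualDSConvModule(32, 64),
            ResidualDSConvModule(64, 128),
            ResidualDSConvModule(128, 256),
            ResidualDSConvModule(256, 512),
        )
        self.head = nn.Conv2d(512, num_classes, kernel_size=1)
        self.gap = nn.AdaptiveAvgPool2d(1)  # global average pooling (GAP)

    def forward(self, x):
        x = self.blocks(self.stem(x))
        x = self.gap(self.head(x))
        return x.flatten(1)  # logits over expression classes

if __name__ == "__main__":
    model = FERNetSketch()
    logits = model(torch.randn(1, 1, 48, 48))  # grayscale face crop (assumed size)
    print(logits.shape)  # torch.Size([1, 7])
```

Replacing the fully connected head with a 1×1 convolution and GAP keeps the network fully convolutional and sharply reduces parameter count, which is consistent with the abstract's description of the design.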

Original language: English
Title of host publication: ICSESS 2018 - Proceedings of 2018 IEEE 9th International Conference on Software Engineering and Service Science
Editors: Li Wenzheng, M. Surendra Prasad Babu
Publisher: IEEE Computer Society
Pages: 688-691
Number of pages: 4
ISBN (Electronic): 9781538665640
DOIs
Publication status: Published - 2 Jul 2018
Externally published: Yes
Event: 9th IEEE International Conference on Software Engineering and Service Science, ICSESS 2018 - Beijing, China
Duration: 23 Nov 2018 – 25 Nov 2018

Publication series

Name: Proceedings of the IEEE International Conference on Software Engineering and Service Sciences, ICSESS
Volume: 2018-November
ISSN (Print): 2327-0586
ISSN (Electronic): 2327-0594

Conference

Conference: 9th IEEE International Conference on Software Engineering and Service Science, ICSESS 2018
Country/Territory: China
City: Beijing
Period: 23/11/18 – 25/11/18

Keywords

  • FER
  • FERNet
  • depth-wise separable convolution
  • residual block
