Acquiring identity and expression information from monocular face image

Ziqi Tu, Dongdong Weng*, Bin Liang, Le Luo

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)

Abstract

In this paper, a flexible deep learning-based framework is proposed that extracts expression and identity information from monocular face images and can combine the identity and expression extracted from different images to generate new face models. The framework uses two encoders to extract expression and identity information, and three decoders to visualize that information by generating face models containing only expression, only identity, and fused expression and identity. By aligning the corresponding vertices of semantically matching parts of the face, an error evaluation method between face models with different topologies is proposed, which reflects the error distribution more intuitively. The experimental results show that the proposed framework achieves higher accuracy than extracting face components with blendshapes. The framework can be used to generate facial expressions for virtual humans, which helps convey emotion and supplement spoken language.
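
The paper itself publishes no code; the following is a minimal PyTorch sketch of the dual-encoder / three-decoder layout and the vertex-aligned cross-topology error described in the abstract. All module names, latent sizes, the vertex count, and the assumption that the encoders consume precomputed image features are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of the abstract's architecture: two encoders (identity,
# expression) and three decoders (identity-only, expression-only, fused).
# Dimensions and mesh resolution are assumptions, not values from the paper.
import torch
import torch.nn as nn

NUM_VERTICES = 5023   # assumed mesh resolution; not stated in the paper
LATENT_DIM = 128      # assumed latent size

class MLP(nn.Module):
    """Small fully connected block reused for every encoder and decoder."""
    def __init__(self, in_dim, out_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)

class FaceFramework(nn.Module):
    def __init__(self, feat_dim=512):
        super().__init__()
        # Two encoders: one for identity, one for expression.
        self.enc_id = MLP(feat_dim, LATENT_DIM)
        self.enc_exp = MLP(feat_dim, LATENT_DIM)
        # Three decoders: identity-only, expression-only, and fused.
        self.dec_id = MLP(LATENT_DIM, NUM_VERTICES * 3)
        self.dec_exp = MLP(LATENT_DIM, NUM_VERTICES * 3)
        self.dec_fused = MLP(2 * LATENT_DIM, NUM_VERTICES * 3)

    def forward(self, feat_a, feat_b):
        """Combine identity from image A with expression from image B."""
        z_id = self.enc_id(feat_a)
        z_exp = self.enc_exp(feat_b)
        id_mesh = self.dec_id(z_id).view(-1, NUM_VERTICES, 3)
        exp_mesh = self.dec_exp(z_exp).view(-1, NUM_VERTICES, 3)
        fused = self.dec_fused(torch.cat([z_id, z_exp], dim=-1))
        return id_mesh, exp_mesh, fused.view(-1, NUM_VERTICES, 3)

def cross_topology_error(verts_a, verts_b, pairs_a, pairs_b):
    """Mean distance between two meshes with different topologies, evaluated
    only at semantically corresponding vertex pairs (index tensors pairs_*)."""
    diff = verts_a[pairs_a] - verts_b[pairs_b]
    return diff.norm(dim=-1).mean()

# Usage with random stand-ins for backbone image features:
model = FaceFramework()
feat_a = torch.randn(1, 512)  # identity source image features (assumed backbone)
feat_b = torch.randn(1, 512)  # expression source image features
id_mesh, exp_mesh, fused_mesh = model(feat_a, feat_b)
```

Restricting the error to semantically corresponding vertex pairs is what lets meshes with different vertex counts and connectivity be compared directly, which matches the abstract's claim of a more intuitive error distribution.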

Original language: English
Pages (from-to): 609-620
Number of pages: 12
Journal: Journal of the Society for Information Display
Volume: 30
Issue number: 8
DOIs
Publication status: Published - Aug 2022

Keywords

  • face information extraction
  • facial expression
  • mixed reality
  • virtual human
