Expression retargeting from images to three-dimensional face models represented in texture space

Ziqi Tu, Dongdong Weng*, Bin Liang, Le Luo

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Facial expressions play an essential role in creating vivid and realistic virtual characters. This paper proposes EXPUV-Net, a convolutional neural network-based facial expression retargeting framework that extracts expression information from an image and transfers it to a specified 3D face. With the help of a nonlinear generator, the framework can produce rich facial expressions without relying on a blendshape model. A data augmentation approach improves the framework's adaptability, and the framework supports expression retargeting for face models with different topologies. Experiments demonstrate the framework's effectiveness and compare it with linear face model-based methods on facial component extraction and face reconstruction; the results show that the proposed framework outperforms the linear methods. The framework is therefore well suited to generating facial expressions for virtual characters.
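
The paper itself does not include code, but the pipeline the abstract describes (an image encoder that extracts an expression code, followed by a nonlinear generator that decodes it onto a texture-space, i.e. UV, representation of a target face) can be sketched roughly as below. This is a minimal PyTorch sketch under stated assumptions: all module names, tensor shapes, and the displacement-map output are illustrative placeholders, not the authors' actual EXPUV-Net architecture.

import torch
import torch.nn as nn

class ExpressionEncoder(nn.Module):
    """Hypothetical CNN that maps a face image to a compact expression code."""
    def __init__(self, code_dim: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 128 -> 64
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, code_dim)

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        return self.fc(self.features(img).flatten(1))

class UVGenerator(nn.Module):
    """Hypothetical nonlinear generator: expression code -> UV displacement map.

    The target face geometry is assumed to be stored as a position map in
    texture (UV) space, so adding the predicted displacement map retargets
    the expression independently of the mesh topology.
    """
    def __init__(self, code_dim: int = 64, uv_size: int = 128):
        super().__init__()
        self.uv_size = uv_size
        self.fc = nn.Linear(code_dim, 128 * (uv_size // 8) ** 2)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),  # xyz offsets
        )

    def forward(self, code: torch.Tensor) -> torch.Tensor:
        h = self.fc(code).view(-1, 128, self.uv_size // 8, self.uv_size // 8)
        return self.deconv(h)

# Usage: retarget the expression in `img` onto a neutral target face.
encoder, generator = ExpressionEncoder(), UVGenerator()
img = torch.randn(1, 3, 128, 128)         # source face image
neutral_uv = torch.randn(1, 3, 128, 128)  # target face position map in UV space
expressive_uv = neutral_uv + generator(encoder(img))
print(expressive_uv.shape)                # torch.Size([1, 3, 128, 128])

Because the generator operates entirely in texture space, any target mesh with a UV parameterization can be driven this way, which is consistent with the paper's claim of handling face models with different topologies; the specific layer counts and resolutions above are guesses.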

Original language: English
Pages (from-to): 775-788
Number of pages: 14
Journal: Journal of the Society for Information Display
Volume: 30
Issue number: 10
Publication status: Published - Oct 2022

Keywords

  • data augmentation method
  • expression information extraction
  • expression retargeting
  • mixed reality
  • virtual character
