Expression retargeting from images to three-dimensional face models represented in texture space

Ziqi Tu, Dongdong Weng*, Bin Liang, Le Luo

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

Facial expressions play a crucial role in creating vivid, realistic virtual characters. This paper proposes EXPUV-Net, a convolutional neural network-based facial expression retargeting framework that extracts expression information from an image and transfers it to a specified 3D face. With the help of a nonlinear generator, the proposed framework can produce rich facial expressions and does not rely on a blendshape model. A data augmentation approach is adopted to improve the framework's adaptability, and the framework supports expression retargeting for face models with different topologies. Experiments demonstrate the effectiveness of the proposed framework and compare it with linear face model-based methods on facial component extraction and face reconstruction; the results show that the proposed framework outperforms the linear methods. The proposed framework is well suited to facial expression generation for virtual characters.
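The topology independence claimed above follows from working in texture (UV) space: if expression deformations are stored as a per-texel offset map rather than as per-vertex blendshape weights, any mesh with a UV parameterization can sample the same map. The sketch below illustrates only this representational idea, not the paper's actual network; the offset map, sampling scheme, and mesh data are all hypothetical stand-ins.

```python
import numpy as np

# Hypothetical sketch: storing 3D vertex displacements in a texture-space
# "offset map". Any mesh whose vertices carry UV coordinates can sample it,
# so one expression representation serves meshes of different topologies
# (one motivation for avoiding per-topology blendshape models).

H = W = 8  # tiny offset map for illustration
offset_map = np.zeros((H, W, 3))
offset_map[:, :, 1] = 0.1  # e.g. a uniform upward displacement

def sample_offsets(uvs, offset_map):
    """Nearest-texel lookup of per-vertex 3D offsets from a UV offset map."""
    h, w, _ = offset_map.shape
    cols = np.clip((uvs[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
    rows = np.clip(((1.0 - uvs[:, 1]) * (h - 1)).round().astype(int), 0, h - 1)
    return offset_map[rows, cols]

# Two face meshes with different topologies (different vertex counts),
# each with its own UV parameterization:
rng = np.random.default_rng(0)
uvs_a, verts_a = rng.random((100, 2)), rng.random((100, 3))
uvs_b, verts_b = rng.random((250, 2)), rng.random((250, 3))

# The same expression (one offset map) retargets to both meshes:
expr_a = verts_a + sample_offsets(uvs_a, offset_map)
expr_b = verts_b + sample_offsets(uvs_b, offset_map)
```

In the paper's setting, the offset map would be produced by the nonlinear generator conditioned on the expression extracted from an input image; here it is a constant placeholder.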

Original language: English
Pages (from-to): 775-788
Number of pages: 14
Journal: Journal of the Society for Information Display
Volume: 30
Issue: 10
DOI
Publication status: Published - October 2022
