TY - JOUR
T1 - 3D facial expression retargeting framework based on an identity-independent expression feature vector
AU - Tu, Ziqi
AU - Weng, Dongdong
AU - Liang, Bin
AU - Luo, Le
N1 - Publisher Copyright:
© 2023, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
PY - 2023/6
Y1 - 2023/6
N2 - One important aspect of multimedia application scenarios is the ability to control the facial expressions of virtual characters. One popular solution is to retarget the expressions of actors to virtual characters. Traditional 3D facial expression retargeting algorithms are mostly based on the Blendshape model. However, excessive reliance on the Blendshape model introduces several limitations. For example, the quality of the base expressions strongly influences the retargeting results, and such methods require large amounts of 3D face data and must be calibrated for each user. We propose a 3D facial expression retargeting framework based on an identity-independent expression feature vector (hereafter referred to as the expression vector). This expression vector, which is related only to facial expressions, is first extracted from face images; then, the corresponding expressions are transferred to the target (which can be any 3D face model) using V2ENet, a generative adversarial network (GAN)-structured model. Our framework requires only the expression vector and a neutral 3D face model to achieve natural and vivid expression retargeting, and it does not rely on the Blendshape model. When using the expression vector obtained from a cognitive perspective, our method can also perform 3D expression retargeting at the cognitive level. A series of experiments demonstrates that our method not only simplifies the expression retargeting process but also achieves better results than the deformation transfer algorithm. The proposed framework is suitable for a wide range of applications and also achieves good expression retargeting for cartoon-style face models.
AB - One important aspect of multimedia application scenarios is the ability to control the facial expressions of virtual characters. One popular solution is to retarget the expressions of actors to virtual characters. Traditional 3D facial expression retargeting algorithms are mostly based on the Blendshape model. However, excessive reliance on the Blendshape model introduces several limitations. For example, the quality of the base expressions strongly influences the retargeting results, and such methods require large amounts of 3D face data and must be calibrated for each user. We propose a 3D facial expression retargeting framework based on an identity-independent expression feature vector (hereafter referred to as the expression vector). This expression vector, which is related only to facial expressions, is first extracted from face images; then, the corresponding expressions are transferred to the target (which can be any 3D face model) using V2ENet, a generative adversarial network (GAN)-structured model. Our framework requires only the expression vector and a neutral 3D face model to achieve natural and vivid expression retargeting, and it does not rely on the Blendshape model. When using the expression vector obtained from a cognitive perspective, our method can also perform 3D expression retargeting at the cognitive level. A series of experiments demonstrates that our method not only simplifies the expression retargeting process but also achieves better results than the deformation transfer algorithm. The proposed framework is suitable for a wide range of applications and also achieves good expression retargeting for cartoon-style face models.
KW - 3D face model
KW - Deep learning
KW - Expression retargeting
KW - Virtual characters
UR - https://www.scopus.com/pages/publications/85148574868
U2 - 10.1007/s11042-023-14547-2
DO - 10.1007/s11042-023-14547-2
M3 - Article
AN - SCOPUS:85148574868
SN - 1380-7501
VL - 82
SP - 23017
EP - 23034
JO - Multimedia Tools and Applications
JF - Multimedia Tools and Applications
IS - 15
ER -