TY - JOUR
T1 - Learning Anchor Transformations for 3D Garment Animation
AU - Zhao, Fang
AU - Li, Zekun
AU - Huang, Shaoli
AU - Weng, Junwu
AU - Zhou, Tianfei
AU - Xie, Guo-Sen
AU - Wang, Jue
AU - Shan, Ying
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - This paper proposes an anchor-based deformation model, namely AnchorDEF, to predict 3D garment animation from a body motion sequence. It deforms a garment mesh template by a mixture of rigid transformations with extra nonlinear displacements. A set of anchors around the mesh surface is introduced to guide the learning of rigid transformation matrices. Once the anchor transformations are found, per-vertex nonlinear displacements of the garment template can be regressed in a canonical space, which reduces the complexity of deformation space learning. By explicitly constraining the transformed anchors to satisfy the consistencies of position, normal and direction, the physical meaning of learned anchor transformations in space is guaranteed for better generalization. Furthermore, an adaptive anchor updating is proposed to optimize the anchor position by being aware of local mesh topology for learning representative anchor transformations. Qualitative and quantitative experiments on different types of garments demonstrate that AnchorDEF achieves state-of-the-art performance on 3D garment deformation prediction in motion, especially for loose-fitting garments.
AB - This paper proposes an anchor-based deformation model, namely AnchorDEF, to predict 3D garment animation from a body motion sequence. It deforms a garment mesh template by a mixture of rigid transformations with extra nonlinear displacements. A set of anchors around the mesh surface is introduced to guide the learning of rigid transformation matrices. Once the anchor transformations are found, per-vertex nonlinear displacements of the garment template can be regressed in a canonical space, which reduces the complexity of deformation space learning. By explicitly constraining the transformed anchors to satisfy the consistencies of position, normal and direction, the physical meaning of learned anchor transformations in space is guaranteed for better generalization. Furthermore, an adaptive anchor updating is proposed to optimize the anchor position by being aware of local mesh topology for learning representative anchor transformations. Qualitative and quantitative experiments on different types of garments demonstrate that AnchorDEF achieves state-of-the-art performance on 3D garment deformation prediction in motion, especially for loose-fitting garments.
UR - http://www.scopus.com/inward/record.url?scp=85175176646&partnerID=8YFLogxK
U2 - 10.1109/CVPR52729.2023.00055
DO - 10.1109/CVPR52729.2023.00055
M3 - Conference article
AN - SCOPUS:85175176646
SN - 1063-6919
SP - 491
EP - 500
JO - IEEE Computer Society Conference on Computer Vision and Pattern Recognition
JF - IEEE Computer Society Conference on Computer Vision and Pattern Recognition
T2 - 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023
Y2 - 18 June 2023 through 22 June 2023
ER -