Learning Anchor Transformations for 3D Garment Animation

Fang Zhao, Zekun Li, Shaoli Huang*, Junwu Weng, Tianfei Zhou, Guo Sen Xie, Jue Wang, Ying Shan

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

6 Citations (Scopus)

Abstract

This paper proposes an anchor-based deformation model, named AnchorDEF, to predict 3D garment animation from a body motion sequence. It deforms a garment mesh template by a mixture of rigid transformations with extra nonlinear displacements. A set of anchors around the mesh surface is introduced to guide the learning of the rigid transformation matrices. Once the anchor transformations are found, per-vertex nonlinear displacements of the garment template can be regressed in a canonical space, which reduces the complexity of learning the deformation space. By explicitly constraining the transformed anchors to satisfy consistency of position, normal, and direction, the learned anchor transformations retain their physical meaning in space, which improves generalization. Furthermore, an adaptive anchor updating scheme is proposed to optimize anchor positions with awareness of the local mesh topology, so that representative anchor transformations are learned. Qualitative and quantitative experiments on different types of garments demonstrate that AnchorDEF achieves state-of-the-art performance on 3D garment deformation prediction in motion, especially for loose-fitting garments.
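As a rough illustration of the deformation model the abstract describes, the NumPy sketch below adds per-vertex displacements to the template in canonical space, applies each anchor's rigid transformation, and blends the results with per-vertex anchor weights. The function name, argument layout, and exact composition order are assumptions made for illustration, not the authors' released implementation.

```python
import numpy as np

def anchor_deform(template_verts, anchor_R, anchor_t, weights, displacements):
    """Sketch of anchor-based garment deformation (assumed interface).

    template_verts: (V, 3) garment mesh template vertices
    anchor_R:       (A, 3, 3) rigid rotation of each anchor
    anchor_t:       (A, 3)    rigid translation of each anchor
    weights:        (V, A)    per-vertex blend weights over anchors (rows sum to 1)
    displacements:  (V, 3)    nonlinear per-vertex offsets regressed in canonical space
    """
    # Add the nonlinear displacements in the canonical (template) space.
    canonical = template_verts + displacements
    # Apply every anchor's rigid transform to every vertex: result is (A, V, 3).
    transformed = np.einsum('aij,vj->avi', anchor_R, canonical) + anchor_t[:, None, :]
    # Blend the per-anchor candidates with the vertex weights: result is (V, 3).
    return np.einsum('va,avi->vi', weights, transformed)
```

In the paper, the anchor transformations, weights, and displacements are predicted from the body motion sequence; here they are simply function arguments so the blending step can be shown in isolation.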

Original language: English
Pages (from-to): 491-500
Number of pages: 10
Journal: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
DOIs
Publication status: Published - 2023
Externally published: Yes
Event: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2023 - Vancouver, Canada
Duration: 18 Jun 2023 – 22 Jun 2023
