
Learning Anchor Transformations for 3D Garment Animation

  • Fang Zhao
  • Zekun Li
  • Shaoli Huang*
  • Junwu Weng
  • Tianfei Zhou
  • Guo Sen Xie
  • Jue Wang
  • Ying Shan
  • *Corresponding author for this work
  • Tencent
  • Swiss Federal Institute of Technology Zurich
  • Nanjing University of Science and Technology

Research output: Contribution to journal › Conference article › peer-review

Abstract

This paper proposes an anchor-based deformation model, namely AnchorDEF, to predict 3D garment animation from a body motion sequence. It deforms a garment mesh template by a mixture of rigid transformations with extra nonlinear displacements. A set of anchors around the mesh surface is introduced to guide the learning of rigid transformation matrices. Once the anchor transformations are found, per-vertex nonlinear displacements of the garment template can be regressed in a canonical space, which reduces the complexity of deformation space learning. By explicitly constraining the transformed anchors to satisfy the consistencies of position, normal and direction, the physical meaning of learned anchor transformations in space is guaranteed for better generalization. Furthermore, an adaptive anchor updating is proposed to optimize the anchor position by being aware of local mesh topology for learning representative anchor transformations. Qualitative and quantitative experiments on different types of garments demonstrate that AnchorDEF achieves the state-of-the-art performance on 3D garment deformation prediction in motion, especially for loose-fitting garments.
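The core deformation described above, blending per-anchor rigid transformations and then adding per-vertex nonlinear displacements, can be sketched in a few lines of NumPy. This is a minimal illustration of the general formulation only, not the authors' implementation; the function name, argument layout, and blending-weight convention are assumptions, and the learned quantities (rotations, translations, weights, displacements) are taken as given inputs.

```python
import numpy as np

def anchor_deform(verts, weights, rotations, translations, displacements):
    """Sketch of anchor-based deformation (not the paper's actual code).

    verts:         (V, 3) garment template vertices
    weights:       (V, A) per-vertex blending weights over A anchors (rows sum to 1)
    rotations:     (A, 3, 3) learned anchor rotation matrices
    translations:  (A, 3) learned anchor translations
    displacements: (V, 3) regressed per-vertex nonlinear displacements
    """
    # Apply every anchor's rigid transform to every vertex: result (A, V, 3)
    transformed = np.einsum('aij,vj->avi', rotations, verts) + translations[:, None, :]
    # Blend the A candidate positions per vertex with the anchor weights: (V, 3)
    blended = np.einsum('va,avi->vi', weights, transformed)
    # Add the nonlinear displacement on top of the blended rigid motion
    return blended + displacements
```

With identity rotations, zero translations, and zero displacements the function is the identity map on the template, which is a quick sanity check on the blending.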

Original language: English
Pages (from-to): 491-500
Number of pages: 10
Journal: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
DOI
Publication status: Published - 2023
Externally published: Yes
Event: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2023 - Vancouver, Canada
Duration: 18 Jun 2023 - 22 Jun 2023
