HandAttNet: Attention 3D Hand Mesh Estimation Network

Jintao Sun, Gangyi Ding

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Hand pose estimation and reconstruction have become increasingly compelling in the metaverse era. In reality, however, hands are often heavily occluded, which makes estimating occluded 3D hand meshes challenging. Previous work tends to ignore information from the occluded regions; we believe that the hand information in the occluded regions can be exploited. Therefore, in this study, we propose a hand mesh estimation network, HandAttNet. We design a cross-attention mechanism module and the DUO-FIT module to inject hand information into the occluded region. Finally, we use a self-attention regression module for 3D hand mesh estimation. Our HandAttNet achieves SOTA performance.
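The abstract describes injecting hand information into occluded regions via cross-attention. The paper text here includes no code, so as a minimal illustrative sketch (all names, shapes, and the residual update are assumptions, not details from the paper), a cross-attention step in which occluded-region tokens query visible-hand tokens might look like:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(occluded_feats, visible_feats, Wq, Wk, Wv):
    """Occluded-region tokens act as queries; visible-hand tokens
    supply keys/values, so hand information flows into the occluded
    region. A hypothetical sketch, not the HandAttNet implementation."""
    Q = occluded_feats @ Wq
    K = visible_feats @ Wk
    V = visible_feats @ Wv
    scale = 1.0 / np.sqrt(Q.shape[-1])
    attn = softmax(Q @ K.T * scale)          # (n_occ, n_vis) weights
    return occluded_feats + attn @ V         # residual update of occluded tokens

# toy example: 4 occluded tokens, 6 visible tokens, feature dim 8
rng = np.random.default_rng(0)
d = 8
occ = rng.standard_normal((4, d))
vis = rng.standard_normal((6, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
out = cross_attention(occ, vis, Wq, Wk, Wv)
print(out.shape)  # (4, 8): occluded tokens enriched with visible-hand context
```

Each output row mixes visible-hand features according to the attention weights, which is one plausible reading of "injecting hand information into the occluded region"; the paper's actual modules (including DUO-FIT) are not specified here.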

Original language: English
Title of host publication: Proceedings - 2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops, VRW 2023
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 645-646
Number of pages: 2
ISBN (Electronic): 9798350348392
DOIs
Publication status: Published - 2023
Event: 2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops, VRW 2023 - Shanghai, China
Duration: 25 Mar 2023 - 29 Mar 2023

Publication series

Name: Proceedings - 2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops, VRW 2023

Conference

Conference: 2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops, VRW 2023
Country/Territory: China
City: Shanghai
Period: 25/03/23 - 29/03/23

