Fusion-competition framework of local topology and global texture for head pose estimation

Dongsheng Ma, Tianyu Fu*, Yifei Yang, Kaibin Cao, Jingfan Fan, Deqiang Xiao, Hong Song, Ying Gu, Jian Yang

*Corresponding author for this work

Research output: Contribution to journal › Article › Peer-review

1 Citation (Scopus)

Abstract

RGB images and point clouds provide texture and geometric structure, respectively, and both are widely used for head pose estimation. However, images lack spatial information, and the quality of point clouds is easily degraded by sensor noise. In this paper, a novel fusion-competition framework (FCF) is proposed to overcome the limitations of a single modality. Global texture information is extracted from the image and local topology information from the point cloud, projecting the heterogeneous data into a common feature subspace. The projected texture feature, weighted by a channel attention mechanism, is embedded into each local point cloud region with its distinct topological features for fusion. A scoring mechanism then creates competition among the regions holding local-global fused features, and the region with the highest score yields the final pose prediction. According to evaluation results on public datasets and our constructed dataset, the FCF improves estimation accuracy and stability by an average of 13.6 % and 12.7 %, respectively, compared with nine state-of-the-art methods.
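The fusion-competition idea described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the dimensions, the sigmoid channel gate, the additive embedding, and the random weights standing in for learned networks are all assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): C feature channels,
# R local point-cloud regions.
C, R = 64, 8

global_texture = rng.standard_normal(C)       # global texture feature from the image
local_topology = rng.standard_normal((R, C))  # one topology feature per region

# Channel attention (sketch): a sigmoid gate over channels reweights
# the projected texture feature.
attention = 1.0 / (1.0 + np.exp(-rng.standard_normal(C)))
weighted_texture = attention * global_texture

# Embed the weighted texture into every local region
# (simple addition here; the paper's fusion may differ).
fused = local_topology + weighted_texture     # shape (R, C)

# Scoring mechanism: each region receives a scalar score; regions compete
# and the highest-scoring region's prediction becomes the final pose.
score_w = rng.standard_normal(C)
scores = fused @ score_w                      # shape (R,)

pose_w = rng.standard_normal((C, 3))          # per-region pose head (yaw, pitch, roll)
poses = fused @ pose_w                        # shape (R, 3)

final_pose = poses[np.argmax(scores)]
```

The competition step is just an argmax over region scores, so only one region's pose head determines the output for a given input.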

Original language: English
Article number: 110285
Journal: Pattern Recognition
Volume: 149
DOI
Publication status: Published - May 2024
