Fusion-competition framework of local topology and global texture for head pose estimation

Dongsheng Ma, Tianyu Fu*, Yifei Yang, Kaibin Cao, Jingfan Fan, Deqiang Xiao, Hong Song, Ying Gu, Jian Yang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

RGB images and point clouds carry texture and geometric structure, respectively, and both are widely used for head pose estimation. However, images lack spatial information, and the quality of point clouds is easily degraded by sensor noise. In this paper, a novel fusion-competition framework (FCF) is proposed to overcome the limitations of a single modality. Global texture information is extracted from the image and local topology information is extracted from the point cloud, projecting the heterogeneous data into a common feature subspace. The projected texture feature, weighted by a channel attention mechanism, is embedded into each local point cloud region with distinct topological features for fusion. A scoring mechanism then creates competition among the regions carrying local-global fused features, and the region with the highest score predicts the final pose. According to evaluation results on public datasets and our constructed dataset, the FCF improves estimation accuracy and stability by an average of 13.6 % and 12.7 %, respectively, compared to nine state-of-the-art methods.
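The pipeline the abstract describes — channel-attention weighting of a global texture feature, embedding it into each local point-cloud region, then letting regions compete via scores — can be illustrated with a minimal numpy sketch. All shapes, names, and the random projections standing in for learned layers are assumptions for illustration; the actual FCF uses trained networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a C-dim global texture feature from the image,
# and R local point-cloud regions, each with a C-dim topology feature.
C, R = 64, 8
texture = rng.standard_normal(C)          # global texture feature
regions = rng.standard_normal((R, C))     # per-region topology features

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Channel attention: re-weight texture channels. A random projection
# stands in for the learned attention layer here.
W_att = rng.standard_normal((C, C)) / np.sqrt(C)
att = sigmoid(W_att @ texture)            # per-channel weights in (0, 1)
texture_w = att * texture

# Fusion: embed the weighted global texture into every local region.
fused = regions + texture_w               # (R, C) local-global features

# Competition: a scoring head ranks the regions; the highest-scoring
# region's pose prediction would be taken as the final estimate.
w_score = rng.standard_normal(C) / np.sqrt(C)
scores = fused @ w_score                  # one score per region
best = int(np.argmax(scores))
print("winning region:", best)
```

In the paper each region would also carry its own pose regressor; the sketch only shows how the score selects which region's prediction is kept.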

Original language: English
Article number: 110285
Journal: Pattern Recognition
Volume: 149
DOIs
Publication status: Published - May 2024

Keywords

  • Feature channel attention
  • Feature fusion
  • Head pose estimation
  • Local regions competition
  • Point cloud
  • RGB image
