SDPENetv2: Spacecraft Pose Estimation Network With Learnable Token Head Based on Discrete Pose Weights

  • Hang Zhou
  • Lu Yao
  • Haoping She*
  • Weiyong Si*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Reliable pose estimation of non-cooperative spacecraft is a key technology for on-orbit servicing and active space debris removal missions. Deep learning has become the mainstream approach for spacecraft pose estimation; however, existing methods suffer from excessive parameter counts, high computational complexity, and relatively inefficient feature utilization. This letter proposes SDPENetv2 to address these issues. We represent the pose of the target spacecraft using discrete pose weights. By introducing an additional constraint term into the loss function, the network learns a more appropriate weight distribution in the later stages of training, improving the accuracy of pose estimation. In addition, we propose the learnable token head, which possesses global attention and makes more comprehensive use of the features extracted by the convolutional neural network. Experiments on the SPEED dataset demonstrate that the position and attitude estimation errors of SDPENetv2 are reduced to 0.105 m and 1.145°, respectively. Compared with other works, these errors are reduced by 20.45%–86.59% and 32.65%–91.80%, respectively. Additionally, SDPENetv2 has only 5.6 M parameters and a computational complexity of merely 1.542 GMACs.
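The abstract does not give architectural details, but a minimal PyTorch sketch can illustrate the two ideas it names: a learnable token head that pools CNN features with global attention, and a discrete-pose-weight output that softly assigns the attitude to a set of pre-defined bins. Everything below (module names, bin count, feature dimensions, the quaternion-anchor reading of "discrete pose weights") is an assumption for illustration, not the authors' implementation.

```python
# Hypothetical sketch, not the authors' code: a learnable query token attends
# over all spatial positions of a CNN feature map (global attention), and the
# pooled feature predicts discrete attitude weights plus a 3-D position.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LearnableTokenHead(nn.Module):
    def __init__(self, feat_dim=256, num_attitude_bins=512, num_heads=4):
        super().__init__()
        # Single learned query token; attending over every feature-map
        # location gives the head a global receptive field.
        self.token = nn.Parameter(torch.randn(1, 1, feat_dim) * 0.02)
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(feat_dim)
        # Outputs: soft weights over attitude anchors (assumed interpretation
        # of "discrete pose weights") and a direct translation regression.
        self.attitude_weights = nn.Linear(feat_dim, num_attitude_bins)
        self.position = nn.Linear(feat_dim, 3)

    def forward(self, feat_map):
        # feat_map: (B, C, H, W) from the CNN backbone.
        B, C, H, W = feat_map.shape
        tokens = feat_map.flatten(2).transpose(1, 2)          # (B, H*W, C)
        query = self.token.expand(B, -1, -1)                  # (B, 1, C)
        pooled, _ = self.attn(query, tokens, tokens)          # global attention
        pooled = self.norm(pooled.squeeze(1))                 # (B, C)
        w = F.softmax(self.attitude_weights(pooled), dim=-1)  # discrete pose weights
        t = self.position(pooled)                             # position estimate
        return w, t


# Usage: the attitude would then be composed from pre-defined anchors using w;
# a regularizer on the weight distribution could play the role of the extra
# constraint term mentioned in the abstract.
head = LearnableTokenHead()
w, t = head(torch.randn(2, 256, 12, 20))
print(w.shape, t.shape)  # torch.Size([2, 512]) torch.Size([2, 3])
```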

Original language: English
Pages (from-to): 13034-13041
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 10
Issue number: 12
DOIs
Publication status: Published - 2025
Externally published: Yes

Keywords

  • Aerial Systems: Applications
  • AI-Based Methods
  • Computer Vision for Automation
