SR-net: satellite relative pose estimation network for a noncooperative target via RGB images

Di Su, Cheng Zhang*, Zhisheng Chen, Ruijing Ji

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Space exploration has drawn increasing attention to space control technology. Accurate pose estimation of a noncooperative target is critical for debris-removal missions and on-orbit servicing. This article introduces the satellite relative pose estimation network (SR-Net), a two-stage training method that estimates the relative pose of a noncooperative target from RGB images. In the first stage, which regresses the 3D translation, the detection and translation-regression modules are combined into a single model. In the second stage, SR-Net decouples the translation and rotation information by replacing regression with classification: it takes the detected image crop as input and fits a rotation by minimizing a weighted least-squares error. Furthermore, a large-scale dataset for 6-DoF pose estimation is introduced, which can serve as a benchmark for state-of-the-art monocular vision-based 6-DoF pose estimation methods. Ablation studies verify the effectiveness and scalability of each module. SR-Net can be added to a baseline model as a separate module to improve 6-DoF pose estimation accuracy for noncooperative targets. The results are highly encouraging: they show that, using vision data alone, it is feasible to accurately estimate the 6-DoF pose of a noncooperative target.
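The abstract does not give implementation details for the rotation-fitting step. As a minimal sketch of one standard way to "fit a rotation by minimizing a weighted least-squares error" from classification outputs (assuming, hypothetically, that each rotation class corresponds to a representative unit quaternion and the classifier's scores act as weights), one can use weighted quaternion averaging, whose solution is the principal eigenvector of the weighted quaternion outer-product sum. The function name and input format are illustrative, not taken from the paper.

```python
import numpy as np

def fit_rotation_wls(bin_quats, weights):
    """Fit a single rotation to weighted candidate rotations.

    bin_quats: (N, 4) array of unit quaternions, one per rotation class
               (hypothetical representation of the paper's rotation bins).
    weights:   (N,) nonnegative classification scores.

    Returns the unit quaternion minimizing the weighted least-squares
    error over the candidates (weighted quaternion averaging: the
    principal eigenvector of the weighted outer-product sum).
    """
    q = np.asarray(bin_quats, dtype=float)       # (N, 4)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                              # normalize weights

    # Weighted sum of quaternion outer products: a symmetric 4x4 matrix.
    M = (w[:, None, None] * (q[:, :, None] * q[:, None, :])).sum(axis=0)

    # eigh returns eigenvalues in ascending order; take the largest.
    vals, vecs = np.linalg.eigh(M)
    q_hat = vecs[:, -1]
    return q_hat / np.linalg.norm(q_hat)
```

This formulation is sign-invariant (q and -q describe the same rotation), which is why averaging is done on outer products rather than on the quaternion components directly.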

Original language: English
Pages (from-to): 31557-31573
Number of pages: 17
Journal: Multimedia Tools and Applications
Volume: 82
Issue number: 20
Publication status: Published - Aug 2023

Keywords

  • CNN
  • Noncooperative target
  • Object detection
  • Pose estimation
  • Weighted least squares

