6D pose annotation and pose estimation method for weak-corner objects under low-light conditions

Zhi Hong Jiang, Jin Hong Chen, Ya Man Jing, Xiao Huang*, Hui Li*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

In unstructured environments such as disaster sites and mine tunnels, it is challenging for robots to estimate the poses of objects against complex lighting backgrounds, which limits their operations. Owing to the shadows produced by a point light source, the brightness of the operation scene is severely unbalanced, making it difficult to accurately extract object features. It is especially difficult to accurately label the poses of objects with weak corners and textures. This study proposes an automatic pose annotation method for such objects, which combines 3D-2D matching projection with rendering technology to improve the efficiency of dataset annotation. A 6D object pose estimation method for low-light conditions (LP_TGC) is then proposed, comprising (1) a light preprocessing neural network model based on a low-light preprocessing module (LPM) to balance the brightness of an image and improve its quality; and (2) a 6D pose estimation model (TGC) based on keypoint matching. Four typical datasets are constructed to verify our method, and the experimental results demonstrate the effectiveness of the proposed LP_TGC method. The estimation model operating on the preprocessed images can accurately estimate object poses in the aforementioned unstructured environments, improving accuracy by an average of ∼3% on the ADD metric.
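The abstract does not detail the annotation pipeline or the evaluation protocol, but two of the generic building blocks it names are standard: projecting known 3D model points into the image to obtain 2D annotations (the "3D-2D matching projection" idea), and the ADD metric (mean 3D distance between the model under the estimated and ground-truth poses). The following is a minimal NumPy sketch of both, not the paper's implementation; all function names and the toy data are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of 3D-2D matching projection for pose annotation:
# given a known object pose (R, t) and camera intrinsics K, project the 3D
# model points into the image to obtain 2D annotations automatically.
# The paper's rendering step is omitted here.
def project_points(model_pts, R, t, K):
    """Project Nx3 model points into Nx2 pixel coordinates."""
    cam_pts = model_pts @ R.T + t      # transform into the camera frame
    uv = cam_pts @ K.T                 # apply the pinhole intrinsics
    return uv[:, :2] / uv[:, 2:3]      # perspective divide

# ADD metric: mean 3D distance between the model transformed by the
# estimated pose and by the ground-truth pose.
def add_metric(model_pts, R_est, t_est, R_gt, t_gt):
    pts_est = model_pts @ R_est.T + t_est
    pts_gt = model_pts @ R_gt.T + t_gt
    return np.linalg.norm(pts_est - pts_gt, axis=1).mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    model = rng.uniform(-0.05, 0.05, size=(500, 3))       # toy 10 cm object (m)
    K = np.array([[600., 0., 320.], [0., 600., 240.], [0., 0., 1.]])
    R_gt, t_gt = np.eye(3), np.array([0., 0., 0.5])
    uv = project_points(model, R_gt, t_gt, K)             # 2D annotations
    R_est, t_est = np.eye(3), np.array([0.002, 0., 0.5])  # 2 mm pose error
    print("ADD:", add_metric(model, R_est, t_est, R_gt, t_gt))
```

A pose estimate is conventionally counted correct when its ADD is below 10% of the object's model diameter; the ∼3% average improvement reported above is measured against this metric.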

Original language: English
Pages (from-to): 630-640
Number of pages: 11
Journal: Science China Technological Sciences
Volume: 66
Issue number: 3
DOIs
Publication status: Published - Mar 2023

Keywords

  • 6D object pose estimation
  • 6D pose annotation
  • low-light conditions

