Targetless Extrinsic Calibration of Camera and Low-Resolution 3-D LiDAR

Ni Ou, Hanyu Cai, Jiawen Yang, Junzheng Wang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

9 Citations (Scopus)

Abstract

Autonomous driving heavily relies on light detection and ranging (LiDAR) and camera sensors, which can significantly improve the performance of perception and navigation tasks when fused together. The success of this cross-modality fusion hinges on accurate extrinsic calibration. In recent years, targetless LiDAR-camera calibration methods have gained increasing attention thanks to their independence from external targets. Nevertheless, developing a targetless method for low-resolution LiDARs remains challenging because it is difficult to extract reliable features from point clouds with a limited number of LiDAR beams. In this article, we propose a robust targetless method to address this challenging problem. It automatically estimates accurate LiDAR and camera poses and solves the extrinsic matrix through hand-eye calibration. Moreover, we carefully analyze the pose estimation issues that arise with low-resolution LiDARs and present our solution. Real-world experiments are carried out on an unmanned ground vehicle (UGV)-mounted multisensor platform containing a charge-coupled device (CCD) camera and a VLP-16 LiDAR. For evaluation, we use a state-of-the-art target-based calibration approach to generate the ground-truth extrinsic parameters. Experimental results demonstrate that our method achieves low calibration error in both translation (3 cm) and rotation (0.59°).
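To make the hand-eye step mentioned in the abstract concrete, below is a minimal sketch (not the authors' implementation) of how an extrinsic matrix can be recovered once the two sensor trajectories have been estimated, e.g., by LiDAR odometry and visual SLAM. The inputs lidar_poses and cam_poses are hypothetical: synchronized lists of 4x4 sensor-to-world transforms. The sketch uses OpenCV's cv2.calibrateHandEye to solve the classical AX = XB formulation.

    import numpy as np
    import cv2

    def camera_to_lidar_extrinsic(lidar_poses, cam_poses):
        """Solve the AX = XB hand-eye problem for the camera-to-LiDAR extrinsic.

        lidar_poses: list of 4x4 LiDAR-to-world transforms, one per keyframe.
        cam_poses:   list of 4x4 camera-to-world transforms at the same keyframes.
        Returns a 4x4 transform mapping camera coordinates into the LiDAR frame.
        """
        # cv2.calibrateHandEye expects "gripper-to-base" and "target-to-camera"
        # poses. Here the LiDAR trajectory plays the gripper and the world frame
        # plays the target, so the camera-to-world poses are inverted first.
        R_lidar, t_lidar, R_world2cam, t_world2cam = [], [], [], []
        for T_lidar, T_cam in zip(lidar_poses, cam_poses):
            R_lidar.append(T_lidar[:3, :3])
            t_lidar.append(T_lidar[:3, 3].reshape(3, 1))
            T_inv = np.linalg.inv(T_cam)  # world-to-camera
            R_world2cam.append(T_inv[:3, :3])
            t_world2cam.append(T_inv[:3, 3].reshape(3, 1))

        R_cam2lidar, t_cam2lidar = cv2.calibrateHandEye(
            R_lidar, t_lidar, R_world2cam, t_world2cam,
            method=cv2.CALIB_HAND_EYE_TSAI)

        T = np.eye(4)
        T[:3, :3] = R_cam2lidar
        T[:3, 3] = t_cam2lidar.ravel()
        return T

In this analogy the LiDAR trajectory takes the role of the robot gripper and the world frame that of the calibration target, which is why the camera-to-world poses are inverted before being passed in. Note that purely planar motion, common for a ground vehicle, is a degenerate case for hand-eye calibration and leaves some degrees of freedom weakly constrained.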

Original language: English
Pages (from-to): 10889-10899
Number of pages: 11
Journal: IEEE Sensors Journal
Volume: 23
Issue number: 10
DOIs
Publication status: Published - 15 May 2023

Keywords

  • Camera
  • light detection and ranging (LiDAR)
  • pose graph optimization
  • sensor calibration
  • simultaneous localization and mapping (SLAM)
