Targetless Lidar-Camera Calibration via Cross-Modality Structure Consistency

Ni Ou, Hanyu Cai, Junzheng Wang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Lidar and cameras serve as essential sensors for automated vehicles and intelligent robots and are frequently fused in complex tasks. Precise extrinsic calibration is a prerequisite for Lidar-camera fusion. Hand-eye calibration is among the most commonly used targetless calibration approaches. This article identifies a particular degeneration problem of hand-eye calibration that arises when sensor motions lack rotation. Such motion is common for ground vehicles, especially those traveling on urban roads, and it leads to a significant deterioration in translational calibration performance. To address this problem, we propose a novel targetless Lidar-camera calibration method based on cross-modality structure consistency, which ensures global convergence within a large search range and achieves highly accurate translation calibration even in challenging scenarios. Through extensive experiments, we demonstrate that our approach outperforms three other state-of-the-art targetless calibration methods across various metrics. Furthermore, we conduct an ablation study to validate the effectiveness of each module within our framework.
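The rotation-deficient degeneration mentioned in the abstract can be illustrated with the classical hand-eye constraint AX = XB: its translation part satisfies (R_A − I) t_X = R_X t_B − t_A, so when the sensor motions contain little rotation (R_A ≈ I), the stacked coefficient matrix (R_A − I) becomes near-singular and the extrinsic translation t_X is poorly constrained. The following is a minimal numerical sketch of this effect (it is not code from the paper; the helper names are ours, and two synthetic motions about the z- and x-axes are assumed):

```python
import numpy as np

def rot(axis, theta):
    """Rotation matrix about a principal axis ('x' or 'z') by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    if axis == "z":
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def translation_observability(theta):
    """Smallest singular value of the stacked (R_A - I) blocks for two
    motions rotating about z and x by `theta` radians.  This value bounds
    how well the hand-eye translation t_X is constrained by
    (R_A - I) t_X = R_X t_B - t_A; a value near zero means degeneration."""
    M = np.vstack([rot("z", theta) - np.eye(3),
                   rot("x", theta) - np.eye(3)])
    return np.linalg.svd(M, compute_uv=False).min()

well_excited = translation_observability(0.5)    # generous rotation
degenerate = translation_observability(0.001)    # near-pure translation
print(f"sigma_min at 0.5 rad:   {well_excited:.4f}")   # ~0.495
print(f"sigma_min at 0.001 rad: {degenerate:.6f}")     # ~0.001
```

The smallest singular value shrinks roughly in proportion to the rotation magnitude (it equals 2·sin(theta/2) for these axes), so translation errors in the linear system are amplified by its reciprocal; this is the motivation for a calibration criterion, such as the paper's cross-modality structure consistency, that does not rely on rotational motion.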

Original language: English
Pages (from-to): 2636-2648
Number of pages: 13
Journal: IEEE Transactions on Intelligent Vehicles
Volume: 9
Issue number: 1
DOIs
Publication status: Published - 1 Jan 2024

Keywords

  • Calibration
  • automated vehicles
  • camera
  • lidar

