Abstract
In autonomous vehicles, accurate extrinsic calibration between LiDAR and camera is an essential prerequisite for multi-sensor information fusion. Automatic, targetless extrinsic calibration has become the mainstream of academic research in recent years. However, existing automatic calibration methods that rely on edge or semantic features are not robust or require specific scene settings. In this paper, instance segmentation is used for automatic extrinsic calibration of LiDAR and camera for the first time. Key targets are extracted from the segmented instances and correlated. Treating extrinsic calibration as an optimization problem, a novel cost function is formulated based on how well the appearance and centroids of the key targets match between the point cloud and image pairs. Differential evolution is then used to minimize the cost function and obtain the optimal extrinsic parameters. Extensive experiments on the KITTI dataset and the Waymo Open Dataset demonstrate the accuracy and robustness of the proposed method: the MAE of rotation and translation is below 0.3° and 0.05 m respectively, outperforming semantic-based and edge-based approaches in terms of accuracy.
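The abstract describes formulating extrinsic calibration as minimizing a matching cost over instance centroids and solving it with differential evolution. The sketch below illustrates that general setup only; it is not the authors' implementation. The centroid-distance-only cost, the Euler-angle parameterization, and all names and sample values (`make_cost`, `lidar_c`, `image_c`, `K`) are assumptions, and the paper's actual cost additionally includes an appearance-matching term.

```python
# Minimal sketch: estimate a LiDAR-camera extrinsic by minimizing the pixel
# distance between projected LiDAR instance centroids and matched image
# instance centroids, using differential evolution. Toy data, not the paper's.
import numpy as np
from scipy.optimize import differential_evolution
from scipy.spatial.transform import Rotation


def make_cost(lidar_centroids, image_centroids, K):
    """Return a cost function over [roll, pitch, yaw (rad), tx, ty, tz (m)]."""
    def cost(params):
        R = Rotation.from_euler("xyz", params[:3]).as_matrix()
        t = params[3:]
        cam_pts = lidar_centroids @ R.T + t      # transform into the camera frame
        proj = cam_pts @ K.T                     # pinhole projection
        uv = proj[:, :2] / proj[:, 2:3]
        return np.mean(np.linalg.norm(uv - image_centroids, axis=1))
    return cost


# Hypothetical matched centroids; the LiDAR points are assumed to already be
# expressed in camera-axis convention (z = depth) so the true extrinsic is
# close to identity and small search bounds suffice.
lidar_c = np.array([[1.0, 0.2, 5.0], [-2.0, 0.5, 8.0], [0.5, 0.1, 12.0]])
image_c = np.array([[753.9, 201.8], [429.2, 218.0], [639.7, 178.9]])
K = np.array([[721.5, 0.0, 609.6], [0.0, 721.5, 172.9], [0.0, 0.0, 1.0]])

bounds = [(-0.1, 0.1)] * 3 + [(-0.5, 0.5)] * 3   # window around an initial guess
result = differential_evolution(make_cost(lidar_c, image_c, K), bounds, seed=0)
print(result.x)  # estimated [roll, pitch, yaw, tx, ty, tz]
```

Because differential evolution is derivative-free, this kind of cost can mix geometric and appearance terms without requiring a differentiable model, which is consistent with the optimization strategy stated in the abstract.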
| Original language | English |
| --- | --- |
| Pages (from-to) | 981-988 |
| Number of pages | 8 |
| Journal | IEEE Robotics and Automation Letters |
| Volume | 8 |
| Issue number | 2 |
| DOI | |
| Publication status | Published - 1 Feb 2023 |