Visual-LiDAR Odometry and Mapping with Monocular Scale Correction and Visual Bootstrapping

Hanyu Cai, Ni Ou, Junzheng Wang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

2 Citations (Scopus)

Abstract

This paper presents a novel visual-LiDAR odometry and mapping method with low-drift characteristics. The proposed method builds on two popular approaches, ORB-SLAM and A-LOAM, with two modifications: monocular scale correction and visual-bootstrapped LiDAR pose initialization. The scale corrector computes the ratio between the depth of image keypoints recovered by triangulation and the depth provided by LiDAR, using an outlier rejection process to improve accuracy. For LiDAR pose initialization, the visual odometry provides initial guesses of the LiDAR motions, leading to better performance. The method is applicable not only to high-resolution LiDAR but also to low-resolution LiDAR. To evaluate the proposed SLAM system's robustness and accuracy, we conducted experiments on the KITTI Odometry and S3E datasets. Experimental results show that our method significantly outperforms standalone ORB-SLAM2 and A-LOAM. Furthermore, regarding the accuracy of visual odometry with scale correction, our method performs similarly to stereo-mode ORB-SLAM2.
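The scale-correction step described in the abstract (a depth ratio between triangulated keypoints and LiDAR measurements, with outlier rejection) can be illustrated with a minimal sketch. The function name, tolerance parameter, and median-based rejection rule below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def estimate_scale(triangulated_depths, lidar_depths, rel_tol=0.2):
    """Estimate the monocular scale factor as the ratio of LiDAR depth to
    triangulated keypoint depth, with a simple median-based outlier rejection.
    (Illustrative sketch; the paper's actual rejection scheme may differ.)"""
    triangulated_depths = np.asarray(triangulated_depths, dtype=float)
    lidar_depths = np.asarray(lidar_depths, dtype=float)

    # Keep only keypoints with a valid (positive) depth from both sources.
    valid = (triangulated_depths > 0) & (lidar_depths > 0)
    ratios = lidar_depths[valid] / triangulated_depths[valid]

    # Reject ratios far from the median, then average the remaining inliers.
    median = np.median(ratios)
    inliers = ratios[np.abs(ratios - median) < rel_tol * median]
    if inliers.size == 0:
        return float(median)  # fall back to the median if all ratios were rejected
    return float(inliers.mean())

# Hypothetical usage: rescale a monocular visual-odometry translation so it can
# serve as a metric initial guess for LiDAR scan matching.
scale = estimate_scale([4.1, 3.9, 12.5, 7.8, 0.9], [8.3, 7.7, 25.2, 15.9, 14.0])
t_visual = np.array([0.10, 0.01, 0.42])   # unscaled translation from monocular VO
t_init_for_lidar = scale * t_visual       # metric initial guess for LiDAR odometry
```

In this toy example the last keypoint pair produces a ratio far from the others and is discarded, so the estimated scale reflects only the consistent measurements.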

Original language: English
Title of host publication: Proceedings of the 11th European Conference on Mobile Robots, ECMR 2023
Editors: Lino Marques, Ivan Markovic
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9798350307047
DOIs
Publication status: Published - 2023
Event: 11th European Conference on Mobile Robots, ECMR 2023 - Coimbra, Portugal
Duration: 4 Sept 2023 - 7 Sept 2023

Publication series

Name: Proceedings of the 11th European Conference on Mobile Robots, ECMR 2023

Conference

Conference: 11th European Conference on Mobile Robots, ECMR 2023
Country/Territory: Portugal
City: Coimbra
Period: 4/09/23 - 7/09/23
