Visual-Inertial-Laser SLAM Based on ORB-SLAM3

Meng Cao*, Jia Zhang*, Wenjie Chen*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

Visual simultaneous localization and mapping (SLAM) is currently a hot topic in the field of unmanned systems. It is popular among researchers because of its accurate localization, low cost, rich information, and wide range of applications, but it still has some problems, including the camera's sensitivity to the number of available feature points and the noise of the inertial measurement unit (IMU) during uniform linear motion. To address these problems, this paper studies a multi-sensor fusion localization algorithm. The main work is as follows: based on ORB-SLAM3, a visual-inertial-laser SLAM algorithm is designed. The relative motion between image frames measured by the laser is obtained from the data of a 2D lidar and a laser height sensor, and the relative motion between image frames measured by the IMU is obtained from IMU preintegration. Using factor graph optimization, the pose of each image frame is optimized jointly from map-point reprojection, the IMU relative-motion increment, and the laser relative-motion increment. On data from a UAV physical platform, the algorithm improves localization accuracy by about 24.4% over the ORB-SLAM3 visual mode and about 22.6% over the ORB-SLAM3 visual-inertial mode.
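The abstract describes fusing two relative-motion measurements of the same frame-to-frame step (IMU preintegration and laser odometry) in a factor graph. The following is a minimal sketch of that idea in its simplest form, not the paper's actual implementation: with a single unknown pose and Gaussian factors, the factor graph optimum reduces to an information-weighted combination of the measurements. All numbers, names (`fuse_relative_motions`), and covariances below are hypothetical.

```python
import numpy as np

def fuse_relative_motions(prev_pose, deltas, covs):
    """Fuse several relative-motion measurements of the same step.

    Each factor says: new_pose ~ prev_pose + delta_i, with covariance cov_i.
    Under Gaussian noise, the optimum of this one-variable factor graph is
    the information-weighted mean of the factor predictions.
    """
    info = sum(np.linalg.inv(c) for c in covs)  # total information matrix
    weighted = sum(np.linalg.inv(c) @ (prev_pose + d)
                   for d, c in zip(deltas, covs))
    return np.linalg.solve(info, weighted)

# Hypothetical numbers: IMU and laser both measure frame-to-frame motion.
prev = np.array([0.0, 0.0, 1.0])            # x, y, z of the last image frame
imu_delta = np.array([0.10, 0.02, 0.00])    # IMU preintegration increment
laser_delta = np.array([0.12, 0.00, 0.01])  # 2D lidar + height-sensor increment
imu_cov = np.diag([0.04, 0.04, 0.09])       # IMU assumed noisier, esp. in z
laser_cov = np.diag([0.01, 0.01, 0.01])     # laser assumed more precise here

pose = fuse_relative_motions(prev, [imu_delta, laser_delta],
                             [imu_cov, laser_cov])
print(pose)  # pulled toward the laser estimate, which carries more information
```

In the full system each image-frame pose also participates in reprojection factors from map points, so the optimization is run over many variables with an iterative solver rather than this closed form; the weighting principle, however, is the same.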

Original language: English
Pages (from-to): 903-912
Number of pages: 10
Journal: Unmanned Systems
Volume: 12
Issue number: 5
DOIs
Publication status: Published - 1 Sept 2024

Keywords

  • IMU
  • Lidar
  • Visual SLAM
  • graph optimization
  • multi-sensor fusion
