Multi-camera visual SLAM for off-road navigation

Yi Yang*, Di Tang, Dongsheng Wang, Wenjie Song, Junbo Wang, Mengyin Fu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

44 Citations (Scopus)

Abstract

With the rapid development of computer vision, vision-based simultaneous localization and mapping (vSLAM) plays an increasingly important role in the field of unmanned driving. However, traditional SLAM methods based on a monocular camera perform well only in simple indoor environments or in urban environments with obvious structural features. In off-road environments, SLAM faces complications such as direct sunlight, leaf occlusion, rough roads, sensor failure, and a sparsity of stably trackable texture. Traditional methods are highly susceptible to these factors, which compromises their stability and reliability. To counter such problems, we propose a panoramic vision SLAM method based on multi-camera collaboration, aiming to exploit the characteristics of panoramic vision and stereo perception to improve localization precision in off-road environments. At the same time, the independence of and information sharing among the cameras in the multi-camera system improve its resistance to bumps, illumination changes, occlusion, and sparse texture in off-road environments, and enable our method to recover the metric scale. These characteristics allow unmanned ground vehicles (UGVs) to localize and navigate safely and reliably in complex off-road environments.
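The metric-scale recovery mentioned in the abstract comes from stereo perception: a known physical baseline between two cameras in the rig fixes the scale that a single monocular camera cannot observe. A minimal illustrative sketch of this principle (the standard pinhole stereo model, not the authors' implementation; all numeric values are hypothetical):

```python
# Illustrative sketch: metric depth from stereo disparity.
# A known baseline between two cameras fixes the metric scale,
# which a monocular camera alone cannot recover.
# Parameter values below are hypothetical, not from the paper.

def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Pinhole stereo model: Z = f * B / d.

    focal_px     -- focal length in pixels
    baseline_m   -- distance between the two camera centers, in meters
    disparity_px -- horizontal pixel offset of a matched feature
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: 700 px focal length, 0.5 m baseline, 10 px disparity
z = depth_from_disparity(700.0, 0.5, 10.0)  # -> 35.0 m
```

Because depth is inversely proportional to disparity, a wider baseline improves depth precision for distant points, which is one reason multi-camera rigs help in open off-road terrain.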

Original language: English
Article number: 103505
Journal: Robotics and Autonomous Systems
Volume: 128
Publication status: Published - Jun 2020

Keywords

  • Multi-camera
  • Off-road
  • Panorama
  • Simultaneous localization and mapping
