Robust localization system fusing vision and lidar under severe occlusion

Yongliang Shi, Weimin Zhang*, Fangxing Li, Qiang Huang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

13 Citations (Scopus)

Abstract

Localization is one of the most fundamental problems for mobile robots. Since a robot is prone to getting lost during navigation under severe occlusion, a robust localization system combining vision and lidar is proposed in this paper. The system is split into an off-line stage and an online stage. In the off-line stage, this paper introduces a method of actively detecting and recording visual landmarks, and an off-line visual bag-of-words is trained from the recorded landmarks. In the online stage, the prediction and update phases of Adaptive Monte Carlo Localization (AMCL) are each improved to enhance localization performance. The prediction phase generates the proposal distribution from prior information obtained by retrieving visual landmarks, and a newly proposed measurement model that selects reliable lidar beams as the observation updates the prediction. Experiments are carried out under strict conditions: 60% of the lidar is occluded, 1/12 of the beams are used as the observation, and at most 300 particles are adopted. The results show that, in both global localization and pose tracking, the localization system proposed in this paper performs much better than the state-of-the-art localization algorithm AMCL.
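The online stage described in the abstract can be sketched as a particle-filter loop: a prediction step whose proposal mixes in poses retrieved from visual landmarks, and an update step that weights particles using only lidar beams judged reliable. The sketch below is a minimal Python illustration under stated assumptions, not the paper's implementation: a hypothetical single-wall toy map stands in for ray-casting in the occupancy map, and the `blend`, `keep_every`, `max_err`, and `sigma` parameters are illustrative values, not taken from the paper.

```python
import math
import random

WALL_X = 5.0  # hypothetical toy map: a single wall at x = 5 m

def expected_range(pose, angle):
    """Range a beam fired at `angle` from `pose` would measure against the
    toy wall (a stand-in for ray-casting in a real occupancy map)."""
    x, _, theta = pose
    c = math.cos(theta + angle)
    return (WALL_X - x) / c if c > 1e-6 else float("inf")

def predict(particles, vision_prior, blend=0.3, noise=0.05):
    """Prediction step: with probability `blend`, draw a particle near the
    pose retrieved via visual landmarks; otherwise jitter the old particle
    (motion noise). Mimics a vision-informed proposal distribution."""
    out = []
    for p in particles:
        base = vision_prior if random.random() < blend else p
        out.append(tuple(v + random.gauss(0.0, noise) for v in base))
    return out

def select_reliable_beams(scan, pose_guess, angles, max_err=0.5, keep_every=12):
    """Keep ~1/12 of the beams, dropping those that disagree strongly with
    the map prediction at a coarse pose guess (a simple proxy for
    rejecting occluded beams)."""
    return [i for i in range(0, len(scan), keep_every)
            if abs(scan[i] - expected_range(pose_guess, angles[i])) < max_err]

def update(particles, scan, angles, beam_idx, sigma=0.2):
    """Update step: weight each particle by a Gaussian beam likelihood
    evaluated only on the selected reliable beams, then normalize."""
    weights = []
    for p in particles:
        w = 1.0
        for i in beam_idx:
            err = scan[i] - expected_range(p, angles[i])
            w *= math.exp(-(err * err) / (2.0 * sigma * sigma))
        weights.append(w)
    total = sum(weights) or 1.0
    return [w / total for w in weights]
```

With a scan simulated from a true pose of (2, 0, 0), a particle at the true pose receives nearly all the normalized weight, while a particle displaced by 2 m is suppressed, illustrating why a handful of trustworthy beams can still discriminate poses.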

Original language: English
Article number: 9040631
Pages (from-to): 62495-62504
Number of pages: 10
Journal: IEEE Access
Volume: 8
DOI
Publication status: Published - 2020
