TY - CONF
T1 - LMVI-SLAM: Robust Low-Light Monocular Visual-Inertial Simultaneous Localization and Mapping
T2 - 2019 IEEE International Conference on Robotics and Biomimetics, ROBIO 2019
AU - Hao, Luoying
AU - Li, Hongjian
AU - Zhang, Qieshi
AU - Hu, Xiping
AU - Cheng, Jun
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/12
Y1 - 2019/12
N2 - Visual-inertial simultaneous localization and mapping (SLAM) has made significant progress in recent years due to the complementary nature of visual and inertial sensors, but challenges remain in low-light environments. Recent visual-inertial SLAM systems often drift or even fail in low-light conditions due to insufficient 3D-2D correspondences for bundle adjustment. To address this issue, this paper first performs image preprocessing with a unified image enhancement method combining adaptive gamma correction and contrast-limited adaptive histogram equalization, which greatly improves the brightness and contrast of the image. Moreover, we track features using optical flow to obtain adequate point correspondences in dim-light environments, and continually supplement the corresponding map points through keyframe insertion and triangulation to maintain tracking. Finally, we construct a tightly-coupled nonlinear optimization model that combines a feature reprojection error on point correspondences with pre-integrated IMU measurements, so that the two constrain and compensate each other for more accurate pose estimation. We validate the performance of our algorithm on a public dataset and in real-world experiments with a mobile robot, including a dark laboratory, and compare against existing state-of-the-art visual-inertial algorithms. Experimental results indicate that our algorithm outperforms other state-of-the-art SLAM systems in accuracy and robustness, and works reliably in both general and low-light environments.
AB - Visual-inertial simultaneous localization and mapping (SLAM) has made significant progress in recent years due to the complementary nature of visual and inertial sensors, but challenges remain in low-light environments. Recent visual-inertial SLAM systems often drift or even fail in low-light conditions due to insufficient 3D-2D correspondences for bundle adjustment. To address this issue, this paper first performs image preprocessing with a unified image enhancement method combining adaptive gamma correction and contrast-limited adaptive histogram equalization, which greatly improves the brightness and contrast of the image. Moreover, we track features using optical flow to obtain adequate point correspondences in dim-light environments, and continually supplement the corresponding map points through keyframe insertion and triangulation to maintain tracking. Finally, we construct a tightly-coupled nonlinear optimization model that combines a feature reprojection error on point correspondences with pre-integrated IMU measurements, so that the two constrain and compensate each other for more accurate pose estimation. We validate the performance of our algorithm on a public dataset and in real-world experiments with a mobile robot, including a dark laboratory, and compare against existing state-of-the-art visual-inertial algorithms. Experimental results indicate that our algorithm outperforms other state-of-the-art SLAM systems in accuracy and robustness, and works reliably in both general and low-light environments.
KW - Low light
KW - Sensor fusion
KW - Visual-inertial SLAM
UR - http://www.scopus.com/inward/record.url?scp=85079055890&partnerID=8YFLogxK
U2 - 10.1109/ROBIO49542.2019.8961635
DO - 10.1109/ROBIO49542.2019.8961635
M3 - Conference contribution
AN - SCOPUS:85079055890
T3 - IEEE International Conference on Robotics and Biomimetics, ROBIO 2019
SP - 272
EP - 277
BT - IEEE International Conference on Robotics and Biomimetics, ROBIO 2019
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 6 December 2019 through 8 December 2019
ER -