LightVO: Lightweight inertial-assisted monocular visual odometry with dense neural networks

Zibin Guo, Mingkun Yang, Ninghao Chen, Zhuoling Xiao, Bo Yan, Shuisheng Lin, Liang Zhou

Research output: Contribution to journal › Conference article › peer-review

2 Citations (Scopus)

Abstract

Monocular visual odometry (VO) is one of the most practical approaches to autonomous vehicle positioning, allowing a vehicle to locate itself automatically in a completely unknown environment. Although some existing VO algorithms have demonstrated superior performance, they usually require careful recalibration to operate well with a different camera or in a different environment. Existing VO methods based on deep learning require little manual calibration, but most of them consume a tremendous amount of computing resources and cannot run in real time. We propose a real-time VO system based on optical flow and a DenseNet structure, assisted by an inertial measurement unit (IMU). It cascades the optical flow network and the DenseNet structure to estimate translation and rotation, and uses the estimated motion together with the IMU for map construction and self-correction. We verify its computational complexity and performance on the KITTI dataset. Experiments show that the proposed system requires less than 50% of the computing power of mainstream deep-learning VO methods while also achieving 30% higher translation accuracy.
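
The abstract describes a cascade in which optical-flow features feed a densely connected (DenseNet-style) regressor that predicts relative translation and rotation, with IMU measurements used downstream for map construction and self-correction. The sketch below is only a hypothetical PyTorch illustration of such a cascade; the placeholder flow encoder, layer sizes, backbone choice, and 6-DoF output head are assumptions, not the architecture published in the paper.

```python
# Hypothetical sketch of an optical-flow -> DenseNet pose-regression cascade.
# Module names, layer sizes, and the fusion scheme are assumptions for
# illustration only; they are not taken from the LightVO paper.
import torch
import torch.nn as nn
import torchvision.models as models

class PoseCascadeSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Placeholder "optical flow" front end: a small conv stack over a
        # stacked image pair (2 x 3 channels). A real system would use a
        # dedicated flow network here.
        self.flow_encoder = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=7, stride=2, padding=3),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # DenseNet backbone (densely connected blocks) as the pose regressor.
        densenet = models.densenet121(weights=None)
        self.backbone = densenet.features
        # 6-DoF relative pose: [tx, ty, tz, roll, pitch, yaw].
        self.pose_head = nn.Linear(1024, 6)

    def forward(self, img_pair):
        # img_pair: (B, 6, H, W) -- two consecutive RGB frames stacked.
        x = self.flow_encoder(img_pair)
        x = self.backbone(x)
        x = torch.relu(x).mean(dim=[2, 3])  # global average pooling
        return self.pose_head(x)

# Example usage with a dummy frame pair.
model = PoseCascadeSketch()
pose = model(torch.randn(1, 6, 224, 224))
print(pose.shape)  # torch.Size([1, 6])
```

In a full pipeline, the predicted relative poses would be integrated over the image sequence and fused with IMU readings to build and correct the trajectory map, as the abstract outlines.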

Original language: English
Article number: 9013757
Journal: Proceedings - IEEE Global Communications Conference, GLOBECOM
DOIs
Publication status: Published - 2019
Externally published: Yes
Event: 2019 IEEE Global Communications Conference, GLOBECOM 2019 - Waikoloa, United States
Duration: 9 Dec 2019 - 13 Dec 2019

Keywords

  • Image sequences
  • IMU
  • Neural network
  • Visual odometry
