Abstract
This paper proposes a semi-direct visual odometry and mapping system for an RGB-D camera that combines the merits of feature-based and direct methods. The system directly estimates the camera motion between two consecutive RGB-D frames by minimizing the photometric error. To handle outliers and noise, a robust sensor model built upon the t-distribution, together with an error function that mixes depth and photometric errors, is used to improve accuracy and robustness. Local graph optimization over keyframes reduces the accumulated drift and refines the local map. The loop closure detection method, which combines appearance similarity with spatial location constraints, increases detection speed. Experimental results demonstrate that the proposed approach achieves higher accuracy in motion estimation and environment reconstruction than other state-of-the-art methods. Moreover, the approach runs in real time on a laptop without a GPU, which makes it attractive for robots with limited computational resources.
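The t-distribution sensor model mentioned in the abstract is commonly realized as iteratively re-weighted least squares: the scale of the residual distribution is re-estimated each iteration, and each residual receives a weight that shrinks for outliers. A minimal sketch of that weighting scheme (the degrees-of-freedom value, iteration count, and sample residuals below are illustrative assumptions, not values from the paper):

```python
import numpy as np

def t_dist_weights(residuals, dof=5.0, iters=10):
    """Estimate the scale of a Student's t-distribution over the
    residuals by fixed-point iteration, then return per-residual
    robust weights w = (nu + 1) / (nu + r^2 / sigma^2).
    dof and iters are assumed hyperparameters for illustration."""
    r2 = residuals ** 2
    sigma2 = np.mean(r2)  # initial (non-robust) scale estimate
    for _ in range(iters):
        w = (dof + 1.0) / (dof + r2 / sigma2)
        # weighted scale update: outliers contribute less each round
        sigma2 = np.mean(w * r2)
    return (dof + 1.0) / (dof + r2 / sigma2)

# usage: a residual vector with one gross outlier (hypothetical data)
res = np.array([0.01, -0.02, 0.015, 2.0])
w = t_dist_weights(res)
# the outlier's weight is strongly suppressed relative to the inliers
```

In a photometric-error minimization these weights would multiply each pixel's squared residual inside the least-squares objective, so gross intensity or depth errors barely influence the estimated camera motion.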
Original language | English |
---|---|
Pages (from-to) | 83-93 |
Number of pages | 11 |
Journal | Journal of Beijing Institute of Technology (English Edition) |
Volume | 28 |
Issue number | 1 |
DOIs | |
Publication status | Published - 1 Mar 2019 |
Keywords
- 3D mapping
- Localization
- Loop closure detection
- RGB-D simultaneous localization and mapping (SLAM)
- Visual odometry