Abstract
In this paper, a semi-direct visual odometry and mapping system using an RGB-D camera is proposed, which combines the merits of both feature-based and direct methods. The system directly estimates the camera motion between two consecutive RGB-D frames by minimizing the photometric error. To handle outliers and noise, a robust sensor model built upon the t-distribution and an error function that mixes depth and photometric errors are used to enhance accuracy and robustness. Local graph optimization based on keyframes is used to reduce the accumulated error and refine the local map. The loop-closure detection method, which combines appearance similarity with spatial location constraints, speeds up detection. Experimental results demonstrate that the proposed approach achieves higher accuracy in motion estimation and environment reconstruction than other state-of-the-art methods. Moreover, the proposed approach runs in real time on a laptop without a GPU, which makes it attractive for robots with limited computational resources.
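The abstract names two key ingredients of the front end: photometric residuals obtained by warping one RGB-D frame into the other under a candidate camera motion, and robust per-pixel weights drawn from a Student's t-distribution. The sketch below illustrates those two steps only; it is not the authors' implementation, and the intrinsics `K`, the degrees of freedom `nu`, the image sizes, and the nearest-neighbour warping are illustrative assumptions.

```python
# Minimal sketch (assumed names and parameters, not the paper's code) of
# photometric residuals between two RGB-D frames and t-distribution weights.
import numpy as np

def warp_photometric_residuals(gray_ref, depth_ref, gray_cur, T, K):
    """Residuals r_i = I_cur(w(x_i, T)) - I_ref(x_i) over valid pixels."""
    h, w = gray_ref.shape
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]

    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_ref
    valid = z > 0

    # Back-project reference pixels to 3-D and transform by the candidate pose T.
    X = (u - cx) / fx * z
    Y = (v - cy) / fy * z
    P = np.stack([X, Y, z, np.ones_like(z)], axis=-1)   # (h, w, 4) homogeneous points
    Pc = P @ T.T                                        # points in the current frame

    # Project into the current image (nearest-neighbour lookup for brevity).
    uc = np.round(fx * Pc[..., 0] / Pc[..., 2] + cx).astype(int)
    vc = np.round(fy * Pc[..., 1] / Pc[..., 2] + cy).astype(int)
    valid &= (Pc[..., 2] > 0) & (uc >= 0) & (uc < w) & (vc >= 0) & (vc < h)

    r = np.zeros_like(gray_ref, dtype=np.float64)
    r[valid] = gray_cur[vc[valid], uc[valid]] - gray_ref[valid]
    return r[valid]

def t_distribution_weights(residuals, nu=5.0, iters=10):
    """Weights w_i = (nu + 1) / (nu + (r_i / sigma)^2), with the scale
    sigma re-estimated by a fixed-point iteration (assumed nu)."""
    sigma2 = np.mean(residuals ** 2) + 1e-12
    for _ in range(iters):
        w = (nu + 1.0) / (nu + residuals ** 2 / sigma2)
        sigma2 = np.mean(w * residuals ** 2) + 1e-12
    return (nu + 1.0) / (nu + residuals ** 2 / sigma2)

if __name__ == "__main__":
    # Tiny synthetic example: identity motion on random frames.
    rng = np.random.default_rng(0)
    gray_ref = rng.uniform(0, 255, (120, 160))
    gray_cur = gray_ref + rng.normal(0, 2, gray_ref.shape)
    depth_ref = rng.uniform(0.5, 4.0, gray_ref.shape)
    K = np.array([[100.0, 0, 80.0], [0, 100.0, 60.0], [0, 0, 1.0]])
    T = np.eye(4)

    r = warp_photometric_residuals(gray_ref, depth_ref, gray_cur, T, K)
    w = t_distribution_weights(r)
    print("weighted photometric cost:", float(np.sum(w * r ** 2)))
```

In a full system, the weighted cost would be minimized iteratively over the 6-DoF pose (e.g. with Gauss-Newton), and the mixed depth/photometric error described in the abstract would add a depth residual term alongside the intensity one.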
Original language | English |
---|---|
Pages (from-to) | 83-93 |
Number of pages | 11 |
Journal | Journal of Beijing Institute of Technology (English Edition) |
Volume | 28 |
Issue | 1 |
DOI | |
Publication status | Published - 1 Mar 2019 |