Monocular vision based robot self-localization

Jiaolong Yang*, Lei Chen, Wei Liang

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

7 Citations (Scopus)

Abstract

In this paper, we propose a position tracking method for robot self-localization with monocular vision. The robot locates itself relying solely on its onboard monocular camera, so the localization result is not affected by odometer error caused by wheel slippage. Our approach uses the Scale Invariant Feature Transform (SIFT) for feature detection and matching over consecutive frames to compute the fundamental matrix from epipolar geometry. A robust outlier elimination technique and iterative computation are combined to improve the robustness and accuracy of the fundamental matrix estimate. The motion parameters, including the rotation matrix and the displacement direction vector, are computed from the fundamental matrix and the pre-calibrated intrinsic parameters. A recursive displacement computation algorithm is then applied to solve for the displacement length used in position tracking. Experiments carried out in an indoor environment demonstrate the effectiveness of our approach.
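The abstract's central step is estimating the fundamental matrix from matched features across consecutive frames. As a minimal illustrative sketch (not the paper's implementation), the normalized 8-point algorithm below estimates F from point correspondences using only NumPy; the SIFT matching front end, the robust outlier elimination (e.g. RANSAC-style), and the iterative refinement the authors describe are all omitted, and the synthetic two-camera setup and all names here are assumptions for demonstration.

```python
import numpy as np

def normalize(pts):
    """Translate the centroid to the origin and scale mean distance to sqrt(2)."""
    centroid = pts.mean(axis=0)
    scale = np.sqrt(2) / np.sqrt(((pts - centroid) ** 2).sum(axis=1)).mean()
    T = np.array([[scale, 0, -scale * centroid[0]],
                  [0, scale, -scale * centroid[1]],
                  [0, 0, 1.0]])
    pts_h = np.column_stack([pts, np.ones(len(pts))])
    return (T @ pts_h.T).T, T

def eight_point(p1, p2):
    """Normalized 8-point estimate of F, with the rank-2 constraint enforced."""
    x1, T1 = normalize(p1)
    x2, T2 = normalize(p2)
    # Each correspondence contributes one row of A via the constraint x2^T F x1 = 0.
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(len(x1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)          # null vector of A, reshaped to 3x3
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0.0                        # enforce rank 2
    F = U @ np.diag(S) @ Vt
    F = T2.T @ F @ T1                 # undo the normalization
    return F / np.linalg.norm(F)

# Synthetic sanity check: project random 3D points into two camera views
# (illustrative camera intrinsics and motion, not from the paper).
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (20, 3)) + np.array([0.0, 0.0, 5.0])
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
th = 0.1                              # small yaw between the two frames
R = np.array([[np.cos(th), 0, np.sin(th)],
              [0, 1, 0],
              [-np.sin(th), 0, np.cos(th)]])
t = np.array([0.5, 0.0, 0.0])

def project(X, R, t):
    x = (K @ (X @ R.T + t).T).T
    return x[:, :2] / x[:, 2:]

p1 = project(X, np.eye(3), np.zeros(3))
p2 = project(X, R, t)
F = eight_point(p1, p2)
```

With pre-calibrated intrinsics K, the essential matrix E = Kᵀ F K can then be decomposed into a rotation matrix and a displacement direction (a unit vector, since scale is unobservable from F alone), which corresponds to the motion-parameter step described in the abstract.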

Original language: English
Title of host publication: 2010 IEEE International Conference on Robotics and Biomimetics, ROBIO 2010
Pages: 1189-1193
Number of pages: 5
DOIs
Publication status: Published - 2010
Event: 2010 IEEE International Conference on Robotics and Biomimetics, ROBIO 2010 - Tianjin, China
Duration: 14 Dec 2010 - 18 Dec 2010

Publication series

Name: 2010 IEEE International Conference on Robotics and Biomimetics, ROBIO 2010

Conference

Conference: 2010 IEEE International Conference on Robotics and Biomimetics, ROBIO 2010
Country/Territory: China
City: Tianjin
Period: 14/12/10 - 18/12/10
