TY - GEN
T1 - Mismatch Removal of Visual Odometry using KLT danger-points tracking and suppression
AU - Nie, Fuyu
AU - Zhang, Weimin
AU - Li, Fangxing
AU - Shi, Yongliang
AU - Guo, Ziyuan
AU - Wang, Yang
AU - Huang, Qiang
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/10
Y1 - 2019/10
N2 - Visual odometry (VO) is a technique that transforms front-end visual observations into pose transformations. In simultaneous localization and mapping (SLAM) based on visual odometry, feature mismatches can lead to high uncertainty and inaccurate state estimation. Although RANSAC (RANdom SAmple Consensus) can reject outliers by iteratively sampling among all feature points, it only eliminates a mismatch rather than finding a better match to replace it. In this paper, we introduce an algorithm that rejects mismatches in visual odometry and finds a better match if possible. Our approach starts with a self-match of the latest camera frame to detect, for every feature, the danger-points likely to cause mismatches. The KLT (Kanade-Lucas-Tomasi) optical flow tracking method is used to predict the motion of each danger-point in the next frame, where we form a danger-area of mismatch. We additionally apply suppression in this area by adding an extra Hamming distance, weighted by a Gaussian distribution, to the points in the area. Mismatches can therefore be removed once the extra Hamming distance is added. We integrate the algorithm into ROS (Robot Operating System) and record a series of video data sets. We then apply our algorithm to the video stream and successfully remove mismatches that are difficult for RANSAC to reject.
AB - Visual odometry (VO) is a technique that transforms front-end visual observations into pose transformations. In simultaneous localization and mapping (SLAM) based on visual odometry, feature mismatches can lead to high uncertainty and inaccurate state estimation. Although RANSAC (RANdom SAmple Consensus) can reject outliers by iteratively sampling among all feature points, it only eliminates a mismatch rather than finding a better match to replace it. In this paper, we introduce an algorithm that rejects mismatches in visual odometry and finds a better match if possible. Our approach starts with a self-match of the latest camera frame to detect, for every feature, the danger-points likely to cause mismatches. The KLT (Kanade-Lucas-Tomasi) optical flow tracking method is used to predict the motion of each danger-point in the next frame, where we form a danger-area of mismatch. We additionally apply suppression in this area by adding an extra Hamming distance, weighted by a Gaussian distribution, to the points in the area. Mismatches can therefore be removed once the extra Hamming distance is added. We integrate the algorithm into ROS (Robot Operating System) and record a series of video data sets. We then apply our algorithm to the video stream and successfully remove mismatches that are difficult for RANSAC to reject.
UR - http://www.scopus.com/inward/record.url?scp=85078357696&partnerID=8YFLogxK
U2 - 10.1109/ARSO46408.2019.8948767
DO - 10.1109/ARSO46408.2019.8948767
M3 - Conference contribution
AN - SCOPUS:85078357696
T3 - Proceedings of IEEE Workshop on Advanced Robotics and its Social Impacts, ARSO
SP - 330
EP - 334
BT - 2019 IEEE International Conference on Advanced Robotics and its Social Impacts, ARSO 2019
PB - IEEE Computer Society
T2 - 15th IEEE International Conference on Advanced Robotics and its Social Impacts, ARSO 2019
Y2 - 31 October 2019 through 2 November 2019
ER -