Image-similarity-based convolutional neural network for robot visual relocalization

Li Wang, Ruifeng Li, Jingwen Sun*, Hock Soon Seah, Chee Kwang Quah, Lijun Zhao, Budianto Tandianus

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)

Abstract

Convolutional neural network (CNN)-based methods, which train an end-to-end model to regress a six degree of freedom (DoF) pose of a robot from a single red–green–blue (RGB) image, have recently been developed to overcome the poor robustness of robot visual relocalization. However, pose precision degrades when the test image is dissimilar to the training images. In this paper, we propose a novel method, named image-similarity-based CNN, which considers the similarity of an input image during CNN training: the higher the similarity of the input image, the higher the precision that can be achieved. We therefore crop the input image into several small image blocks and measure the similarity between each cropped block and the training dataset images using a feature vector from a fully connected CNN layer. The most similar block is then selected to regress the pose, and a genetic algorithm is used to determine the crop positions. Experiments are conducted on both the open-source 7-Scenes dataset and two actual indoor environments. The results show that the proposed algorithm outperforms existing solutions and effectively reduces large regression errors.
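The selection step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes feature vectors have already been extracted from a fully connected CNN layer for each cropped block and for each training image, and compares them with cosine similarity (the paper does not specify the metric, so this choice is an assumption). All function names are illustrative.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two 1-D feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def most_similar_pair(block_features, train_features):
    """Compare every cropped block's feature vector against every
    training-image feature vector and return the indices and score of
    the overall most similar (block, training image) pair.

    block_features, train_features: lists of 1-D numpy arrays,
    assumed to come from the same fully connected CNN layer.
    """
    best_block, best_train, best_score = -1, -1, -1.0
    for bi, bf in enumerate(block_features):
        for ti, tf in enumerate(train_features):
            score = cosine_similarity(bf, tf)
            if score > best_score:
                best_block, best_train, best_score = bi, ti, score
    return best_block, best_train, best_score

# Toy example with 2-D feature vectors: block 0 is clearly closest
# to training image 0, so that pair should be selected.
train = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
blocks = [np.array([1.0, 0.1]), np.array([0.2, 1.0])]
bi, ti, score = most_similar_pair(blocks, train)
```

In the paper's pipeline the selected block's image is the one fed to the pose-regression branch; here only the similarity-based selection itself is sketched.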

Original language: English
Pages (from-to): 1245-1259
Number of pages: 15
Journal: Sensors and Materials
Volume: 32
Issue number: 4
DOIs
Publication status: Published - 10 Apr 2020
Externally published: Yes

Keywords

  • CNN
  • Image similarity
  • Visual relocalization
