Visual Semantic Navigation Based on Deep Learning for Indoor Mobile Robots

Li Wang, Lijun Zhao*, Guanglei Huo, Ruifeng Li, Zhenghua Hou, Pan Luo, Zhenye Sun, Ke Wang, Chenguang Yang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

21 Citations (Scopus)

Abstract

To improve the environmental perception ability of mobile robots during semantic navigation, a three-layer perception framework based on transfer learning is proposed, comprising a place recognition model, a rotation region recognition model, and a "side" recognition model. The first model recognizes different regions within rooms and corridors, the second determines where the robot should rotate, and the third decides on which side of a corridor or aisle the robot should walk. Furthermore, the "side" recognition model can correct the robot's motion in real time, ensuring accurate arrival at the specified target. Moreover, semantic navigation is accomplished using only a single sensor (a camera). Several experiments conducted in a real indoor environment demonstrate the effectiveness and robustness of the proposed perception framework.
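The abstract indicates that each of the three recognition models is obtained via transfer learning from a pretrained deep network. The sketch below illustrates how one such classifier (e.g. the place recognition model) could be fine-tuned; the choice of PyTorch/torchvision, the ResNet-18 backbone, and the number of place classes are illustrative assumptions, not details taken from the paper.

```python
# Minimal transfer-learning sketch for a place recognition classifier.
# PyTorch/torchvision, the ResNet-18 backbone, and the class count are
# illustrative assumptions, not details from the paper.
import torch
import torch.nn as nn
from torchvision import models

NUM_PLACE_CLASSES = 6  # hypothetical number of room/corridor regions

# Load an ImageNet-pretrained backbone and freeze its feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with a new classification head.
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_PLACE_CLASSES)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a batch of place-labelled images."""
    backbone.train()
    optimizer.zero_grad()
    logits = backbone(images)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with a dummy batch of 224x224 RGB images.
if __name__ == "__main__":
    dummy_images = torch.randn(4, 3, 224, 224)
    dummy_labels = torch.randint(0, NUM_PLACE_CLASSES, (4,))
    print(train_step(dummy_images, dummy_labels))
```

The same fine-tuning pattern could in principle be repeated for the rotation region and "side" recognition models, each with its own label set.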

Original language: English
Article number: 1627185
Journal: Complexity
Volume: 2018
DOIs
Publication status: Published - 22 Apr 2018
Externally published: Yes

