3D semantic mapping based on convolutional neural networks

Jing Li, Yanyu Liu, Junzheng Wang, Min Yan, Yanzhi Yao

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

8 Citations (Scopus)

Abstract

As an important part of environmental perception, maps guarantee the accuracy of intelligent robots in navigation, localization and path planning. Traditional 3D maps mainly capture the spatial structure of objects and lack semantic information. This paper proposes a method that combines convolutional neural networks (CNNs) and Simultaneous Localization and Mapping (SLAM) to build global dense 3D semantic maps of indoor scenes. A deep neural network composed of convolution and deconvolution layers is designed to predict the semantic category of every pixel. An RGB-D camera is used to capture scene information, perform localization and build 3D maps simultaneously. To integrate the semantic information into the 3D scene, an octree map is used in place of the traditional point cloud representation, which reduces errors arising from pose estimation and single-frame labeling. With this method, the accuracy of the semantic information is greatly improved.
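The abstract does not give implementation details, but the fusion step it describes (accumulating per-pixel CNN predictions from many frames into map cells so that single-frame labeling errors average out) can be illustrated with a minimal sketch. The code below is an assumption-laden simplification, not the authors' implementation: it uses a fixed-resolution voxel grid as a stand-in for octree leaves, and the class count, voxel size and function names are hypothetical.

```python
import numpy as np
from collections import defaultdict

NUM_CLASSES = 13          # assumed number of indoor semantic classes
VOXEL_SIZE = 0.05         # assumed leaf resolution in metres

# voxel key -> accumulated log-probabilities over semantic classes
voxel_logits = defaultdict(lambda: np.zeros(NUM_CLASSES))

def voxel_key(point):
    """Map a 3D point (in the world frame) to its voxel index."""
    return tuple(np.floor(point / VOXEL_SIZE).astype(int))

def fuse_frame(points_world, class_probs):
    """Fuse one frame's labelled point cloud into the map.

    points_world: (N, 3) points already transformed by the SLAM pose estimate.
    class_probs:  (N, NUM_CLASSES) per-pixel softmax output of the CNN.
    Summing log-probabilities across views lets consistent labels reinforce
    each other while single-frame mistakes are averaged out.
    """
    eps = 1e-6
    log_probs = np.log(class_probs + eps)
    for p, lp in zip(points_world, log_probs):
        voxel_logits[voxel_key(p)] += lp

def voxel_label(key):
    """Most likely semantic class for a voxel after multi-view fusion."""
    return int(np.argmax(voxel_logits[key]))
```

In an actual octree map the leaves would be stored hierarchically (e.g. in an OctoMap-style structure) rather than in a flat dictionary, but the per-cell label fusion follows the same idea.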

Original language: English
Title of host publication: Proceedings of the 37th Chinese Control Conference, CCC 2018
Editors: Xin Chen, Qianchuan Zhao
Publisher: IEEE Computer Society
Pages: 9303-9308
Number of pages: 6
ISBN (Electronic): 9789881563941
DOIs
Publication status: Published - 5 Oct 2018
Event: 37th Chinese Control Conference, CCC 2018 - Wuhan, China
Duration: 25 Jul 2018 - 27 Jul 2018

Publication series

Name: Chinese Control Conference, CCC
Volume: 2018-July
ISSN (Print): 1934-1768
ISSN (Electronic): 2161-2927

Conference

Conference: 37th Chinese Control Conference, CCC 2018
Country/Territory: China
City: Wuhan
Period: 25/07/18 - 27/07/18

Keywords

  • CNNs
  • Octree Map
  • SLAM
  • Semantic Map
