Building and optimization of 3D semantic map based on Lidar and camera fusion

Jing Li, Xin Zhang, Jiehao Li*, Yanyu Liu, Junzheng Wang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

90 Citations (Scopus)

Abstract

When considering robot applications in complex scenarios, traditional geometric maps are insufficient because they lack interaction with the environment. In this paper, a large-scale and accurate three-dimensional (3D) semantic map integrating Lidar and camera information is presented to represent real-time road scenes. Firstly, simultaneous localization and mapping (SLAM) is performed to locate the robot through multi-sensor fusion of the Lidar and an inertial measurement unit (IMU), and a map of the surrounding scene is constructed while the robot is moving. Moreover, convolutional neural network (CNN)-based semantic segmentation of images is employed to develop the semantic map of the environment. Following temporal and spatial synchronization, the fused Lidar and camera data are used to generate semantically labeled frames of point clouds, which are then assembled into a semantic map according to the estimated poses. Besides, to improve the classification capacity, a higher-order 3D fully connected conditional random field (CRF) method is utilized to optimize the semantic map. Finally, extensive experimental results on the KITTI dataset illustrate the effectiveness of the proposed method.
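As a rough illustration of the label-transfer step described above (a minimal sketch, not the authors' implementation), the snippet below projects Lidar points into the camera image using KITTI-style calibration matrices (P2, R0_rect, Tr_velo_to_cam, here assumed padded to homogeneous 4x4 form where needed) and assigns each point the semantic class predicted for the pixel it lands on. The function name `label_point_cloud` and the `seg_mask` array of per-pixel class IDs are hypothetical.

```python
import numpy as np

def label_point_cloud(points_velo, seg_mask, P2, R0_rect, Tr_velo_to_cam):
    """Assign a semantic label to each Lidar point (sketch, hypothetical API).

    points_velo:     (N, >=3) Lidar points in the sensor frame.
    seg_mask:        (H, W) integer class IDs from a CNN segmentation network.
    P2:              (3, 4) camera projection matrix (KITTI convention).
    R0_rect:         (4, 4) rectification rotation, padded to homogeneous form.
    Tr_velo_to_cam:  (4, 4) Lidar-to-camera extrinsics, padded to homogeneous form.
    Returns (N,) labels; -1 for points that do not project into the image.
    """
    n = points_velo.shape[0]
    labels = np.full(n, -1, dtype=np.int32)

    # Lidar frame -> rectified camera frame, in homogeneous coordinates.
    pts_h = np.hstack([points_velo[:, :3], np.ones((n, 1))])   # (N, 4)
    cam = R0_rect @ Tr_velo_to_cam @ pts_h.T                    # (4, N)

    # Keep only points in front of the camera.
    front = np.where(cam[2, :] > 0.1)[0]

    # Project into the image plane and convert to pixel indices.
    img = P2 @ cam[:, front]                                    # (3, K)
    u = (img[0, :] / img[2, :]).astype(np.int32)
    v = (img[1, :] / img[2, :]).astype(np.int32)

    # Copy the segmentation label of the pixel each point falls onto.
    h, w = seg_mask.shape
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    labels[front[ok]] = seg_mask[v[ok], u[ok]]
    return labels
```

The labeled points of each frame would then be transformed by the SLAM pose estimate and accumulated into the global semantic map before the CRF refinement stage.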

Original language: English
Pages (from-to): 394-407
Number of pages: 14
Journal: Neurocomputing
Volume: 409
DOI
Publication status: Published - 7 Oct 2020
