COSEM: Collaborative Semantic Map Matching Framework for Autonomous Robots

Yufeng Yue, Mingxing Wen, Chunyang Zhao, Yuanzhe Wang*, Danwei Wang

*Corresponding author for this work

Research output: Journal article, peer-reviewed

9 citations (Scopus)

Abstract

Relative localization is a fundamental requirement for the coordination of multiple robots. To date, existing research in relative localization mainly depends on the extraction of low-level geometric features such as planes, lines, and points, which may fail in challenging cases where the initial error is large and the overlapping area is small. In this article, a novel approach named collaborative semantic map matching (COSEM) is proposed to estimate the relative transformation between robots. COSEM jointly performs multimodal information fusion, semantic data association, and optimization in a unified framework. First, each robot applies a multimodal information fusion model to generate a local semantic map. Since the correspondences between local maps are latent variables, a flexible semantic data association strategy is proposed using expectation-maximization. Instead of assigning hard geometric data associations, semantic and geometric associations are jointly estimated. Then, minimization of the expected cost yields a rigid transformation matrix between the two semantic maps. Evaluations on the SemanticKITTI benchmark and real-world experiments demonstrate improved accuracy, convergence, and robustness.
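The abstract describes an expectation-maximization scheme in which soft correspondences combine geometric proximity with semantic label agreement, and the M-step recovers a rigid transform. The sketch below is a simplified illustration of that idea, not the authors' exact COSEM formulation: it assumes each local map is a set of labeled 3-D points, uses a Gaussian kernel times a label-agreement factor for the E-step, and a weighted Kabsch solve for the M-step. All function names and parameters (`sigma`, `label_weight`) are hypothetical.

```python
import numpy as np

def best_rigid_transform(src, dst, weights):
    """Weighted Kabsch: rigid (R, t) minimizing sum_i w_i ||R @ src_i + t - dst_i||^2."""
    w = weights / weights.sum()
    mu_s = (w[:, None] * src).sum(axis=0)
    mu_d = (w[:, None] * dst).sum(axis=0)
    S = (w[:, None] * (src - mu_s)).T @ (dst - mu_d)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(S)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

def semantic_em_registration(src_pts, src_lbl, dst_pts, dst_lbl,
                             sigma=1.0, label_weight=0.8, iters=30):
    """EM-style alignment of two labeled point sets (illustrative sketch only).

    E-step: soft correspondences = geometric Gaussian kernel x semantic agreement.
    M-step: weighted rigid transform toward responsibility-weighted targets.
    """
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = src_pts @ R.T + t
        # E-step: responsibilities over destination points for every source point.
        d2 = ((moved[:, None, :] - dst_pts[None, :, :]) ** 2).sum(-1)
        geom = np.exp(-d2 / (2.0 * sigma ** 2))
        sem = np.where(src_lbl[:, None] == dst_lbl[None, :], 1.0, 1.0 - label_weight)
        resp = geom * sem
        resp /= resp.sum(axis=1, keepdims=True) + 1e-12
        # M-step: match each source point to its responsibility-weighted target.
        virtual_targets = resp @ dst_pts
        confidence = resp.max(axis=1)
        R, t = best_rigid_transform(src_pts, virtual_targets, confidence)
    return R, t
```

Setting `label_weight` close to 1 down-weights correspondences whose semantic classes disagree, which is one plausible way the joint semantic-geometric association described above can widen the convergence basin when the initial pose error is large.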

Original language: English
Pages (from-to): 3843-3853
Number of pages: 11
Journal: IEEE Transactions on Industrial Electronics
Volume: 69
Issue number: 4
DOI
Publication status: Published - 1 Apr 2022
