COSEM: Collaborative Semantic Map Matching Framework for Autonomous Robots

Yufeng Yue, Mingxing Wen, Chunyang Zhao, Yuanzhe Wang*, Danwei Wang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

9 Citations (Scopus)

Abstract

Relative localization is a fundamental requirement for the coordination of multiple robots. To date, existing research on relative localization has mainly depended on the extraction of low-level geometric features such as planes, lines, and points, which may fail in challenging cases where the initial error is large and the overlapping area is small. In this article, a novel approach named collaborative semantic map matching (COSEM) is proposed to estimate the relative transformation between robots. COSEM jointly performs multimodal information fusion, semantic data association, and optimization in a unified framework. First, each robot applies a multimodal information fusion model to generate a local semantic map. Since the correspondences between local maps are latent variables, a flexible semantic data association strategy based on expectation-maximization is proposed: instead of assigning hard geometric correspondences, semantic and geometric associations are estimated jointly. Minimizing the expected cost then yields the rigid transformation matrix between the two semantic maps. Evaluations on the SemanticKITTI benchmark and real-world experiments demonstrate improved accuracy, convergence, and robustness.
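The EM-based matching step the abstract describes can be illustrated with a minimal Python sketch. This is not the authors' COSEM implementation: the Gaussian geometric kernel, the scalar semantic-compatibility weight, and all names here (semantic_em_registration, sigma, label_agree) are assumptions made for exposition. The E-step forms soft correspondences that jointly weight geometric distance and semantic label agreement, and the M-step solves a weighted Kabsch problem for the rigid transform, mirroring "minimization of the expected cost results in a rigid transformation matrix."

    import numpy as np

    def semantic_em_registration(src_pts, src_labels, tgt_pts, tgt_labels,
                                 sigma=1.0, label_agree=0.9, n_iters=30):
        """EM-style rigid registration of two semantically labeled point sets.

        E-step: soft correspondence weights combine a Gaussian geometric
        kernel with a semantic compatibility term. M-step: weighted Kabsch
        solve for the rigid transform (R, t) minimizing the expected cost.
        """
        R, t = np.eye(3), np.zeros(3)
        # Semantic compatibility is soft, not hard: mismatched labels still
        # contribute a little, keeping the association "flexible".
        sem = np.where(src_labels[:, None] == tgt_labels[None, :],
                       label_agree, 1.0 - label_agree)
        for _ in range(n_iters):
            moved = src_pts @ R.T + t
            d2 = ((moved[:, None, :] - tgt_pts[None, :, :]) ** 2).sum(-1)
            # E-step: joint semantic-geometric responsibilities.
            w = np.exp(-d2 / (2.0 * sigma ** 2)) * sem
            wsum = w.sum() + 1e-12
            # M-step: weighted centroids and cross-covariance (Kabsch/SVD).
            mu_s = (w.sum(axis=1) @ src_pts) / wsum
            mu_t = (w.sum(axis=0) @ tgt_pts) / wsum
            H = (src_pts - mu_s).T @ w @ (tgt_pts - mu_t)
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t = mu_t - R @ mu_s
        return R, t

    if __name__ == "__main__":
        # Synthetic check: recover a known yaw rotation and translation.
        rng = np.random.default_rng(0)
        src = rng.normal(size=(300, 3))
        labels = rng.integers(0, 5, size=300)
        th = 0.4
        R_gt = np.array([[np.cos(th), -np.sin(th), 0.0],
                         [np.sin(th),  np.cos(th), 0.0],
                         [0.0,         0.0,        1.0]])
        t_gt = np.array([0.5, -0.2, 0.1])
        tgt = src @ R_gt.T + t_gt
        R, t = semantic_em_registration(src, labels, tgt, labels)
        print("rotation error:", np.linalg.norm(R - R_gt))
        print("translation error:", np.linalg.norm(t - t_gt))

In this sketch the semantic term simply damps weights for label-mismatched pairs; the paper's full formulation jointly estimates semantic and geometric associations rather than using a fixed compatibility constant.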

Original language: English
Pages (from-to): 3843-3853
Number of pages: 11
Journal: IEEE Transactions on Industrial Electronics
Volume: 69
Issue number: 4
DOIs
Publication status: Published - 1 Apr 2022

Keywords

  • Collaborative robots
  • relative localization
  • semantic mapping
