TY - JOUR
T1 - COSEM
T2 - Collaborative Semantic Map Matching Framework for Autonomous Robots
AU - Yue, Yufeng
AU - Wen, Mingxing
AU - Zhao, Chunyang
AU - Wang, Yuanzhe
AU - Wang, Danwei
N1 - Publisher Copyright:
© 1982-2012 IEEE.
PY - 2022/4/1
Y1 - 2022/4/1
N2 - Relative localization is a fundamental requirement for the coordination of multiple robots. To date, existing research on relative localization mainly depends on the extraction of low-level geometric features such as planes, lines, and points, which may fail in challenging cases where the initial error is large and the overlapping area is small. In this article, a novel approach named collaborative semantic map matching (COSEM) is proposed to estimate the relative transformation between robots. COSEM jointly performs multimodal information fusion, semantic data association, and optimization in a unified framework. First, each robot applies a multimodal information fusion model to generate a local semantic map. Since the correspondences between local maps are latent variables, a flexible semantic data association strategy is proposed using expectation-maximization. Instead of assigning hard geometric data associations, semantic and geometric associations are estimated jointly. Minimization of the expected cost then yields a rigid transformation between the two semantic maps. Evaluations on the SemanticKITTI benchmark and real-world experiments demonstrate improved accuracy, convergence, and robustness.
AB - Relative localization is a fundamental requirement for the coordination of multiple robots. To date, existing research on relative localization mainly depends on the extraction of low-level geometric features such as planes, lines, and points, which may fail in challenging cases where the initial error is large and the overlapping area is small. In this article, a novel approach named collaborative semantic map matching (COSEM) is proposed to estimate the relative transformation between robots. COSEM jointly performs multimodal information fusion, semantic data association, and optimization in a unified framework. First, each robot applies a multimodal information fusion model to generate a local semantic map. Since the correspondences between local maps are latent variables, a flexible semantic data association strategy is proposed using expectation-maximization. Instead of assigning hard geometric data associations, semantic and geometric associations are estimated jointly. Minimization of the expected cost then yields a rigid transformation between the two semantic maps. Evaluations on the SemanticKITTI benchmark and real-world experiments demonstrate improved accuracy, convergence, and robustness.
KW - Collaborative robots
KW - relative localization
KW - semantic mapping
UR - http://www.scopus.com/inward/record.url?scp=85104234467&partnerID=8YFLogxK
U2 - 10.1109/TIE.2021.3070497
DO - 10.1109/TIE.2021.3070497
M3 - Article
AN - SCOPUS:85104234467
SN - 0278-0046
VL - 69
SP - 3843
EP - 3853
JO - IEEE Transactions on Industrial Electronics
JF - IEEE Transactions on Industrial Electronics
IS - 4
ER -