TY - GEN
T1 - SAME
T2 - 2024 IEEE International Conference on Unmanned Systems, ICUS 2024
AU - Tian, Xinyu
AU - Deng, Yinan
AU - Tang, Yujie
AU - Wang, Jiahui
AU - Dang, Ruina
AU - Yang, Yi
AU - Yue, Yufeng
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - The integration of ground and aerial robots, known as the Ground-Air collaborative system, has the potential to handle mapping and navigation tasks in complex environments by exploiting the advantages of multiple perspectives and high maneuverability. The significant differences between the fields of view and viewing angles of UAVs and UGVs make it challenging to collaboratively generate a consistently styled map to guide exploration. This paper proposes a method that represents the environment using semantic Octree units as a basis, which are fused to generate a consistent global map. We are the first to tightly connect Ground-Air collaborative mapping and navigation through semantics-centered elements. During exploration, we combine the UAV's extensive visual coverage with the UGV's close-range, precise observations to achieve a multi-layered reconstruction of the scene. The 2D semantic map generated by projecting the 3D map provides information for path planning, creating a positive feedback loop between high-quality mapping and autonomous exploration. Merging RGB images, depth point clouds, and semantic data, the UAV and UGV independently construct local Octomap maps, which are then merged into a cohesive global map through network communication. Both simulation and real-world verification show that this semantics-centered Ground-Air collaborative approach enhances both the precision of mapping and the efficiency of exploration.
AB - The integration of ground and aerial robots, known as the Ground-Air collaborative system, has the potential to handle mapping and navigation tasks in complex environments by exploiting the advantages of multiple perspectives and high maneuverability. The significant differences between the fields of view and viewing angles of UAVs and UGVs make it challenging to collaboratively generate a consistently styled map to guide exploration. This paper proposes a method that represents the environment using semantic Octree units as a basis, which are fused to generate a consistent global map. We are the first to tightly connect Ground-Air collaborative mapping and navigation through semantics-centered elements. During exploration, we combine the UAV's extensive visual coverage with the UGV's close-range, precise observations to achieve a multi-layered reconstruction of the scene. The 2D semantic map generated by projecting the 3D map provides information for path planning, creating a positive feedback loop between high-quality mapping and autonomous exploration. Merging RGB images, depth point clouds, and semantic data, the UAV and UGV independently construct local Octomap maps, which are then merged into a cohesive global map through network communication. Both simulation and real-world verification show that this semantics-centered Ground-Air collaborative approach enhances both the precision of mapping and the efficiency of exploration.
KW - active mapping
KW - collaborative exploration
KW - ground-air
KW - semantic-centered
UR - http://www.scopus.com/inward/record.url?scp=85218036383&partnerID=8YFLogxK
U2 - 10.1109/ICUS61736.2024.10839908
DO - 10.1109/ICUS61736.2024.10839908
M3 - Conference contribution
AN - SCOPUS:85218036383
T3 - Proceedings of 2024 IEEE International Conference on Unmanned Systems, ICUS 2024
SP - 1923
EP - 1930
BT - Proceedings of 2024 IEEE International Conference on Unmanned Systems, ICUS 2024
A2 - Song, Rong
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 18 October 2024 through 20 October 2024
ER -