TY - GEN
T1 - PLP-SLAM
T2 - 18th International Conference on Control, Automation, Robotics and Vision, ICARCV 2024
AU - Zhu, Yeqing
AU - Zhao, Liangyu
AU - Zhao, Qingjie
AU - Wu, Zhenyu
AU - Shen, Hongming
AU - Wang, Danwei
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - For indoor environments, prior point-based visual SLAM methods cannot run in real time under low texture and poor illumination. To address this issue, this work proposes PLP-SLAM (Point-Line-Plane SLAM) with an RGB-D camera. Firstly, point and line features are detected in RGB images; for line features, a length-suppression and near-line-merging strategy is established to improve line extraction quality. Secondly, plane features are extracted with an agglomerative hierarchical clustering method from the point cloud obtained by the RGB-D camera. The point cloud is divided into several nodes; unlike prior methods that spend considerable time estimating a normal vector for each individual point, this work assumes that the points within each node share the same plane normal vector, which significantly improves computational efficiency. Thirdly, sparse maps containing points, lines, and planes are established, and the scenes are also reconstructed as dense maps that display plane features directly. Finally, the pose estimation performance of the proposed method is compared against state-of-the-art SLAM systems on public datasets. All modules run in real time on a CPU, and experiments show that PLP-SLAM significantly enhances the robustness of the camera's 6DoF pose estimation while creating more detailed maps of the environment.
AB - For indoor environments, prior point-based visual SLAM methods cannot run in real time under low texture and poor illumination. To address this issue, this work proposes PLP-SLAM (Point-Line-Plane SLAM) with an RGB-D camera. Firstly, point and line features are detected in RGB images; for line features, a length-suppression and near-line-merging strategy is established to improve line extraction quality. Secondly, plane features are extracted with an agglomerative hierarchical clustering method from the point cloud obtained by the RGB-D camera. The point cloud is divided into several nodes; unlike prior methods that spend considerable time estimating a normal vector for each individual point, this work assumes that the points within each node share the same plane normal vector, which significantly improves computational efficiency. Thirdly, sparse maps containing points, lines, and planes are established, and the scenes are also reconstructed as dense maps that display plane features directly. Finally, the pose estimation performance of the proposed method is compared against state-of-the-art SLAM systems on public datasets. All modules run in real time on a CPU, and experiments show that PLP-SLAM significantly enhances the robustness of the camera's 6DoF pose estimation while creating more detailed maps of the environment.
UR - http://www.scopus.com/inward/record.url?scp=85217439488&partnerID=8YFLogxK
U2 - 10.1109/ICARCV63323.2024.10821614
DO - 10.1109/ICARCV63323.2024.10821614
M3 - Conference contribution
AN - SCOPUS:85217439488
T3 - 2024 18th International Conference on Control, Automation, Robotics and Vision, ICARCV 2024
SP - 245
EP - 250
BT - 2024 18th International Conference on Control, Automation, Robotics and Vision, ICARCV 2024
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 12 December 2024 through 15 December 2024
ER -