TY - GEN
T1 - Bionic Visual-based Data Conversion for SLAM
AU - Li, Mingzhu
AU - Zhang, Weimin
AU - Shi, Yongliang
AU - Yao, Zhuo
AU - Liang, Zhenshuo
AU - Huang, Qiang
N1 - Publisher Copyright:
© 2018 IEEE.
PY - 2018/7/2
Y1 - 2018/7/2
N2 - Simultaneous localization and mapping (SLAM) is a key capability that most mobile robots require for autonomous navigation. Traditional visual SLAM uses a camera to acquire data and constructs a sparse or dense 3D map, which is convenient for robot localization but ill-suited to obstacle avoidance and autonomous navigation. In this paper, we therefore propose a data conversion algorithm based on bionic visual characteristics that constructs an accurate two-dimensional map for indoor navigation. The algorithm runs two main parallel threads: Ground Detection and Data Conversion. The ground detection thread detects the ground plane in real time and, based on geometrical invariability, obtains the transformation matrix from the camera to the ground. The data conversion thread first filters the depth data and then applies a variable-resolution model based on human visual characteristics, which keeps the conversion time low without affecting accuracy. Each group of experiments shows that the data converted by our algorithm are highly precise and can be used to construct an accurate map for navigation.
AB - Simultaneous localization and mapping (SLAM) is a key capability that most mobile robots require for autonomous navigation. Traditional visual SLAM uses a camera to acquire data and constructs a sparse or dense 3D map, which is convenient for robot localization but ill-suited to obstacle avoidance and autonomous navigation. In this paper, we therefore propose a data conversion algorithm based on bionic visual characteristics that constructs an accurate two-dimensional map for indoor navigation. The algorithm runs two main parallel threads: Ground Detection and Data Conversion. The ground detection thread detects the ground plane in real time and, based on geometrical invariability, obtains the transformation matrix from the camera to the ground. The data conversion thread first filters the depth data and then applies a variable-resolution model based on human visual characteristics, which keeps the conversion time low without affecting accuracy. Each group of experiments shows that the data converted by our algorithm are highly precise and can be used to construct an accurate map for navigation.
UR - http://www.scopus.com/inward/record.url?scp=85064131286&partnerID=8YFLogxK
U2 - 10.1109/ROBIO.2018.8665130
DO - 10.1109/ROBIO.2018.8665130
M3 - Conference contribution
AN - SCOPUS:85064131286
T3 - 2018 IEEE International Conference on Robotics and Biomimetics, ROBIO 2018
SP - 1607
EP - 1612
BT - 2018 IEEE International Conference on Robotics and Biomimetics, ROBIO 2018
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2018 IEEE International Conference on Robotics and Biomimetics, ROBIO 2018
Y2 - 12 December 2018 through 15 December 2018
ER -