TY - JOUR
T1 - Deep Learning for Image and Point Cloud Fusion in Autonomous Driving
T2 - A Review
AU - Cui, Yaodong
AU - Chen, Ren
AU - Chu, Wenbo
AU - Chen, Long
AU - Tian, Daxin
AU - Li, Ying
AU - Cao, Dongpu
N1 - Publisher Copyright:
© 2000-2011 IEEE.
PY - 2022/2/1
Y1 - 2022/2/1
N2 - Autonomous vehicles have experienced rapid development in the past few years. However, achieving full autonomy is not a trivial task, due to the complex and dynamic nature of the driving environment. Therefore, autonomous vehicles are equipped with a suite of different sensors to ensure robust, accurate environmental perception. In particular, camera-LiDAR fusion is becoming an emerging research theme. However, so far there has been no critical review focusing on deep-learning-based camera-LiDAR fusion methods. To bridge this gap and motivate future research, this article reviews recent deep-learning-based data fusion approaches that leverage both image and point cloud data. It first gives a brief overview of deep learning on image and point cloud processing, followed by in-depth reviews of camera-LiDAR fusion methods for depth completion, object detection, semantic segmentation, tracking, and online cross-sensor calibration, organized by their respective fusion levels. Furthermore, we compare these methods on publicly available datasets. Finally, we identify gaps and overlooked challenges between current academic research and real-world applications. Based on these observations, we provide our insights and point out promising research directions.
AB - Autonomous vehicles have experienced rapid development in the past few years. However, achieving full autonomy is not a trivial task, due to the complex and dynamic nature of the driving environment. Therefore, autonomous vehicles are equipped with a suite of different sensors to ensure robust, accurate environmental perception. In particular, camera-LiDAR fusion is becoming an emerging research theme. However, so far there has been no critical review focusing on deep-learning-based camera-LiDAR fusion methods. To bridge this gap and motivate future research, this article reviews recent deep-learning-based data fusion approaches that leverage both image and point cloud data. It first gives a brief overview of deep learning on image and point cloud processing, followed by in-depth reviews of camera-LiDAR fusion methods for depth completion, object detection, semantic segmentation, tracking, and online cross-sensor calibration, organized by their respective fusion levels. Furthermore, we compare these methods on publicly available datasets. Finally, we identify gaps and overlooked challenges between current academic research and real-world applications. Based on these observations, we provide our insights and point out promising research directions.
KW - Camera-LiDAR fusion
KW - deep learning
KW - depth completion
KW - object detection
KW - semantic segmentation
KW - sensor fusion
KW - tracking
UR - http://www.scopus.com/inward/record.url?scp=85103166057&partnerID=8YFLogxK
U2 - 10.1109/TITS.2020.3023541
DO - 10.1109/TITS.2020.3023541
M3 - Review article
AN - SCOPUS:85103166057
SN - 1524-9050
VL - 23
SP - 722
EP - 739
JO - IEEE Transactions on Intelligent Transportation Systems
JF - IEEE Transactions on Intelligent Transportation Systems
IS - 2
ER -