TY - JOUR
T1 - RAIENet
T2 - End-to-End Multitasking Road All Information Extractor
AU - Chen, Xuemei
AU - Ren, Pengfei
AU - Xu, Zeyuan
AU - Xu, Shuyuan
AU - Jia, Yaohan
N1 - Publisher Copyright:
© 2024 Beijing Institute of Technology. All rights reserved.
PY - 2024
Y1 - 2024
N2 - Road lanes and markings are the basis of environment perception for autonomous driving. In this paper, we propose an end-to-end multi-task network, the Road All Information Extractor (RAIENet), which aims to extract the full information of the road surface, including road lanes, road markings, and the correspondences between them. Based on prior knowledge of pavement information, we explore and exploit the deep progressive relationship between lane segmentation and pavement marking detection, and adapt different attention mechanisms to the different tasks. RAIENet achieves a lane detection F1-score of 0.807 and a road marking mean average precision of 0.971 at an intersection-over-union (IoU) threshold of 0.5 on the newly labeled See More on Roads Plus (CeyMo+) dataset, and we further validate it on two well-known datasets, Berkeley DeepDrive 100K (BDD100K) and CULane. In addition, we propose a post-processing method that uses lidar point cloud information to generate bird’s eye view lanes (BEVLane) for the construction of high-definition maps and subsequent decision-making and planning. The code and data are available at https://github.com/mayberpf/RAIEnet.
KW - autonomous driving
KW - lane segmentation
KW - multitasking
KW - pavement information
KW - pavement marking detection
UR - http://www.scopus.com/inward/record.url?scp=85215754844&partnerID=8YFLogxK
DO - 10.15918/j.jbit1004-0579.2024.001
M3 - Article
AN - SCOPUS:85215754844
SN - 1004-0579
VL - 33
SP - 374
EP - 388
JO - Journal of Beijing Institute of Technology (English Edition)
JF - Journal of Beijing Institute of Technology (English Edition)
IS - 5
ER -