RAIENet: End-to-End Multitasking Road All Information Extractor

Xuemei Chen*, Pengfei Ren, Zeyuan Xu, Shuyuan Xu, Yaohan Jia

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Road lanes and markings are the basis of environment perception for autonomous driving. In this paper, we propose an end-to-end multi-task network, the Road All Information Extractor (RAIENet), which aims to extract the full information of the road surface, including road lanes, road markings, and their correspondences. Based on prior knowledge of pavement information, we explore and exploit the deep progressive relationship between lane segmentation and pavement marking detection, and adapt different attention mechanisms to the different tasks. RAIENet achieves a lane detection accuracy of 0.807 F1-score and a ground marking accuracy of 0.971 mean average precision at an intersection-over-union (IoU) threshold of 0.5 on the newly labeled See More on Roads Plus (CeyMo+) dataset, and is further validated on two well-known datasets, Berkeley DeepDrive 100K (BDD100K) and CULane. In addition, a post-processing method for generating bird's-eye-view lanes (BEVLane) from lidar point cloud information is proposed, which supports the construction of high-definition maps and subsequent decision-making and planning. The code and data are available at https://github.com/mayberpf/RAIEnet.
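The abstract describes the design only at a high level; the sketch below is a minimal PyTorch-style illustration of the shared-backbone, two-head multi-task arrangement it refers to. All class names, layer sizes, and the simplified marking head are illustrative assumptions made here, not the authors' implementation (the actual code is in the linked repository).

```python
import torch
import torch.nn as nn


class MultiTaskRoadHead(nn.Module):
    """Hypothetical sketch: one shared backbone feeds a lane-segmentation head
    and a road-marking head, loosely mirroring the multi-task idea in the
    abstract. Shapes and layer choices are placeholders, not RAIENet itself."""

    def __init__(self, num_marking_classes: int = 11):
        super().__init__()
        # Shared feature extractor (stand-in for the real backbone).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Lane head: per-pixel lane logits.
        self.lane_head = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 1),
        )
        # Marking head: coarse per-cell class scores (simplified detection stand-in).
        self.marking_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(8),
            nn.Conv2d(64, num_marking_classes, 1),
        )

    def forward(self, x):
        feats = self.backbone(x)
        lane_logits = self.lane_head(feats)        # (B, 1, H/4, W/4)
        marking_logits = self.marking_head(feats)  # (B, num_classes, 8, 8)
        return lane_logits, marking_logits


if __name__ == "__main__":
    model = MultiTaskRoadHead()
    lanes, markings = model(torch.randn(1, 3, 256, 512))
    print(lanes.shape, markings.shape)
```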

Original language: English
Pages (from-to): 374-388
Number of pages: 15
Journal: Journal of Beijing Institute of Technology (English Edition)
Volume: 33
Issue number: 5
Publication status: Published - 2024

Keywords

  • autonomous driving
  • lane segmentation
  • multitasking
  • pavement information
  • pavement marking detection
