TY - JOUR
T1 - Steering Angle-Guided Multimodal Fusion Lane Detection for Autonomous Driving
AU - Gong, Yan
AU - Zhang, Xinyu
AU - Lu, Jianli
AU - Jiang, Xinmin
AU - Wang, Zichen
AU - Liu, Hao
AU - Li, Zhiwei
AU - Wang, Li
AU - Yang, Qingshan
AU - Wu, Xingang
N1 - Publisher Copyright:
© 2024 IEEE. All rights reserved.
PY - 2025
Y1 - 2025
N2 - Lane detection is a critical part of autonomous driving technology, yet it remains challenging in difficult situations such as adverse lighting and severe occlusion. Previous methods depend strongly on extracted image features and ignore other features; information from other modalities is needed to assist lane detection, especially for curved lanes. In this paper, considering that the vehicle steering angle is closely related to the visual features of lane lines, we propose a novel model named Image-Angle Fusion Network (IAFNet), which solves the lane detection problem by fusing vehicle steering angle features with image features. To make the steering angle features better match the image features, we use the tensor outer product to extend the dimensionality of the steering angle information. A lightweight Image-Angle cross-attention module (LIA-CAM) is proposed to learn the implicit relationship between steering angles and the visual features of lane lines, improving the performance of our model in difficult situations. To guide the network to retain correct steering angle information, we introduce a steering-angle regression loss. We also release a new dataset based on the Udacity dataset: the ImageAngle-Udacity (IA-Udacity) dataset. Extensive experiments on the IA-Udacity dataset show that our method outperforms current state-of-the-art methods in both efficiency and accuracy. Code and data are available at https://github.com/gongyan1/LIA-CAM.
AB - Lane detection is a critical part of autonomous driving technology, yet it remains challenging in difficult situations such as adverse lighting and severe occlusion. Previous methods depend strongly on extracted image features and ignore other features; information from other modalities is needed to assist lane detection, especially for curved lanes. In this paper, considering that the vehicle steering angle is closely related to the visual features of lane lines, we propose a novel model named Image-Angle Fusion Network (IAFNet), which solves the lane detection problem by fusing vehicle steering angle features with image features. To make the steering angle features better match the image features, we use the tensor outer product to extend the dimensionality of the steering angle information. A lightweight Image-Angle cross-attention module (LIA-CAM) is proposed to learn the implicit relationship between steering angles and the visual features of lane lines, improving the performance of our model in difficult situations. To guide the network to retain correct steering angle information, we introduce a steering-angle regression loss. We also release a new dataset based on the Udacity dataset: the ImageAngle-Udacity (IA-Udacity) dataset. Extensive experiments on the IA-Udacity dataset show that our method outperforms current state-of-the-art methods in both efficiency and accuracy. Code and data are available at https://github.com/gongyan1/LIA-CAM.
KW - Lane detection
KW - convolutional neural network
KW - deep learning
KW - multimodal fusion
KW - steering angle
UR - https://www.scopus.com/pages/publications/85213420486
U2 - 10.1109/TITS.2024.3507536
DO - 10.1109/TITS.2024.3507536
M3 - Article
AN - SCOPUS:85213420486
SN - 1524-9050
VL - 26
SP - 1470
EP - 1481
JO - IEEE Transactions on Intelligent Transportation Systems
JF - IEEE Transactions on Intelligent Transportation Systems
IS - 2
ER -