Abstract
Lane segmentation at night is a challenging problem in autonomous driving perception, and solving it improves the robustness of deployed systems. Existing methods have shown strong performance on benchmark datasets, but they do not account for the poor lighting conditions of practical applications; for example, lane segmentation performance degrades significantly at night. In this paper, we propose a novel multi-modal nighttime lane segmentation algorithm that fuses complementary information from camera and LiDAR. We illustrate the role of image entropy in characterizing the distribution of light at night, and propose an adaptive entropy fusion method that captures the spatial relationship between entropy and the two modalities to adapt to different lighting scenes. Because the features of narrow, elongated lanes are more likely to be lost at night, we propose a lane feature enhancement module that strengthens the network's ability to capture lane features. Extensive experiments on the SHIFT dataset at night demonstrate that the proposed method outperforms state-of-the-art semantic segmentation and lane segmentation approaches, achieving 88.36% at 14.06 fps and 87.24% at 26.88 fps, making it suitable for real-time applications.
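To make the entropy cue mentioned above concrete, here is a minimal sketch of per-image Shannon entropy over 8-bit grayscale intensities. This is the standard histogram-based formulation, not the paper's implementation; the function name and the 256-bin histogram are assumptions for illustration only.

```python
import numpy as np

def image_entropy(gray: np.ndarray) -> float:
    """Shannon entropy (bits) of an 8-bit grayscale image.

    Dark, uniformly lit patches concentrate intensity mass in a
    few bins and score low; well-lit, textured patches spread
    mass across bins and score high, so an entropy map can
    indicate where the camera is informative at night.
    """
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()   # intensity probabilities
    p = p[p > 0]            # drop empty bins to avoid log(0)
    return float(-(p * np.log2(p)).sum())

# A constant (pitch-black) patch carries no information ...
dark = np.zeros((64, 64), dtype=np.uint8)
# ... while a uniformly random patch approaches the 8-bit maximum.
rng = np.random.default_rng(0)
noisy = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
```

In a fusion setting, such an entropy map (computed patch-wise rather than per-image) could weight how much the network trusts camera features versus LiDAR features in each spatial region.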
Original language | English |
---|---|
Pages (from-to) | 1-13 |
Number of pages | 13 |
Journal | IEEE Transactions on Intelligent Vehicles |
DOIs | |
Publication status | Accepted/In press - 2024 |
Externally published | Yes |
Keywords
- Adaptation models
- Adaptive Entropy Multi-modal Fusion
- Cameras
- Entropy
- Feature extraction
- Fuses
- Lane Feature Enhance
- Lane Segmentation
- Laser radar
- Lighting
- Multi-modal
- Nighttime Segmentation