Lane Detection in Low-light Conditions Using an Efficient Data Enhancement: Light Conditions Style Transfer

Tong Liu*, Zhaowei Chen, Yi Yang, Zehao Wu, Haowei Li

*Corresponding author for this work

Research output: Contribution to conference › Paper › peer-review

76 Citations (Scopus)

Abstract

Deep learning techniques are now widely used for lane detection, but applying them in low-light conditions remains a challenge. Although multi-task learning and contextual-information-based methods have been proposed to address the problem, they require either additional manual annotations or extra inference overhead. In this paper, we propose a style-transfer-based data enhancement method that uses Generative Adversarial Networks (GANs) to generate images in low-light conditions, increasing the environmental adaptability of the lane detector. Our solution consists of three parts: the proposed SIM-CycleGAN, light-condition style transfer, and a lane detection network. It requires neither additional manual annotations nor extra inference overhead. We validated our method on the lane detection benchmark CULane using ERFNet. Empirically, a lane detection model trained with our method demonstrated adaptability in low-light conditions and robustness in complex scenarios. Our code for this paper will be publicly available.
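The core idea in the abstract is that low-light training images are synthesized from existing daytime images, so the original lane annotations can be reused without any new labeling. The following is a minimal sketch of that augmentation loop; the `fake_low_light` gamma-darkening function is a toy stand-in for the learned SIM-CycleGAN generator, and the function names and parameters are illustrative assumptions, not the paper's actual API.

```python
import random

def fake_low_light(pixels, gamma=2.2):
    """Toy stand-in for the SIM-CycleGAN generator: darken an image
    (a flat list of 0-255 grayscale values) with a gamma curve.
    The real method learns the day-to-night mapping adversarially."""
    return [int(255 * (p / 255) ** gamma) for p in pixels]

def augment_dataset(images, labels, ratio=0.5, seed=0):
    """Append low-light renditions of a random subset of the training
    images, reusing the original lane labels unchanged -- the point
    being that no extra manual annotation is needed."""
    rng = random.Random(seed)
    idx = rng.sample(range(len(images)), int(len(images) * ratio))
    aug_images = images + [fake_low_light(images[i]) for i in idx]
    aug_labels = labels + [labels[i] for i in idx]
    return aug_images, aug_labels
```

The augmented pairs would then be fed to the detector (ERFNet in the paper) exactly like the originals, which is why the approach adds no inference-time overhead.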

Original language: English
Pages: 1394-1399
Number of pages: 6
DOIs
Publication status: Published - 2020
Event: 31st IEEE Intelligent Vehicles Symposium, IV 2020 - Virtual, Las Vegas, United States
Duration: 19 Oct 2020 – 13 Nov 2020

Conference

Conference: 31st IEEE Intelligent Vehicles Symposium, IV 2020
Country/Territory: United States
City: Virtual, Las Vegas
Period: 19/10/20 – 13/11/20
