LE-GAN: Unsupervised low-light image enhancement network using attention module and identity invariant loss

Ying Fu*, Yang Hong, Linwei Chen, Shaodi You

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

70 Citations (Scopus)

Abstract

Low-light image enhancement aims to recover normal-light images from images captured in very dim environments. Existing methods cannot handle noise, color bias, and over-exposure well, and they fail to ensure visual quality when paired training data are unavailable. To address these problems, we propose a novel unsupervised low-light image enhancement network named LE-GAN, which is based on generative adversarial networks and is trained with unpaired low/normal-light images. Specifically, we design an illumination-aware attention module that strengthens the network's feature extraction to suppress noise and color bias and improve visual quality. We further propose a novel identity invariant loss that addresses over-exposure by making the network learn to enhance low-light images adaptively. Extensive experiments show that the proposed method achieves promising results. Furthermore, we collect a large-scale low-light dataset named Paired Normal/Low-light Images (PNLI). It consists of 2,000 pairs of low/normal-light images captured in various real-world scenes, providing the research community with a high-quality dataset to advance the development of this field.
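The abstract names two technical components, an illumination-aware attention module and an identity invariant loss, without giving their exact formulations. The sketch below is one plausible reading of both ideas in PyTorch, not the paper's actual implementation: the attention gate derived from the inverted luminance map and the L1 identity penalty on normal-light inputs are assumptions made for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class IlluminationAttention(nn.Module):
    """Hypothetical illumination-guided attention gate.

    Derives a spatial attention map from the (inverted) luminance of the
    low-light input and uses it to re-weight intermediate features, so
    darker regions receive stronger enhancement. The exact design in
    LE-GAN is not specified in the abstract; this is an assumption.
    """

    def __init__(self, channels: int):
        super().__init__()
        # Map a 1-channel illumination estimate to a per-channel spatial gate.
        self.gate = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, features: torch.Tensor, low_light_rgb: torch.Tensor) -> torch.Tensor:
        # Approximate illumination with the max RGB channel; invert it so
        # dark pixels get attention values close to 1.
        illum = low_light_rgb.max(dim=1, keepdim=True).values
        attn = self.gate(1.0 - illum)
        attn = F.interpolate(attn, size=features.shape[-2:],
                             mode="bilinear", align_corners=False)
        return features * attn


def identity_invariant_loss(generator: nn.Module,
                            normal_light: torch.Tensor) -> torch.Tensor:
    """One plausible form of an identity-style constraint.

    A normal-light image should pass through the enhancer nearly
    unchanged; penalizing deviation discourages over-exposure and lets
    the network enhance adaptively. The L1 form is an assumption.
    """
    return F.l1_loss(generator(normal_light), normal_light)
```

In an unpaired adversarial setup such a term would typically be added, with some weight, to the generator's adversarial objective, so that enhancement strength scales with how dark the input actually is.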

Original language: English
Article number: 108010
Journal: Knowledge-Based Systems
Volume: 240
Publication status: Published - 15 Mar 2022

Keywords

  • Identity invariant loss
  • Illumination-aware attention module
  • Low-light image enhancement
  • Paired normal/low-light images dataset
