Abstract
Hazy images often suffer from blurring, detail loss, and color distortion, which hinders downstream visual tasks such as tracking, classification, and object detection. In recent years, significant advances have been made in the image dehazing task, dominated by convolutional neural networks (CNNs). Most existing CNN-based methods estimate the transmission map and atmospheric light and then recover the haze-free image through the atmospheric scattering model. However, their dehazing performance is limited by inaccurate estimation of these quantities. To this end, we present a new architecture called the multi-level features and adaptive fusion network (MFAF-Net) for single image dehazing, which obtains the haze-free image in an end-to-end manner. First, we employ a novel context enhanced module as the core of feature extraction; it combines multi-scale dilated convolution layers with a feature attention module, enabling the network to capture richer contextual information. Second, we present a new fusion approach called the adaptive fusion module for fusing both low- and high-level features; it provides more flexibility when handling features with inconsistent semantics and levels, so our network restores images with more detailed information. Experimental results on both synthetic and real-world datasets demonstrate that MFAF-Net outperforms existing state-of-the-art methods in terms of quantitative and qualitative evaluation metrics. The code will be made publicly available on GitHub.
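For context, the atmospheric scattering model the abstract refers to is commonly written as I(x) = J(x)·t(x) + A·(1 − t(x)), where I is the observed hazy image, J the scene radiance, t the transmission map, and A the atmospheric light. The sketch below (not from the paper; function name and toy values are illustrative) shows the model inversion that estimation-based methods rely on, including the usual lower bound on t to avoid division blow-up:

```python
import numpy as np

def recover_haze_free(hazy, transmission, atmospheric_light, t_min=0.1):
    """Invert the scattering model I = J*t + A*(1 - t) to get
    J = (I - A) / max(t, t_min) + A. A small floor t_min keeps the
    division stable in dense-haze regions where t is near zero."""
    t = np.clip(transmission, t_min, 1.0)[..., np.newaxis]  # (H, W, 1)
    return (hazy - atmospheric_light) / t + atmospheric_light

# Toy example: a uniform 2x2 RGB image with values in [0, 1].
hazy = np.full((2, 2, 3), 0.8)
t = np.full((2, 2), 0.5)           # assumed transmission map
A = np.array([1.0, 1.0, 1.0])      # assumed atmospheric light
J = recover_haze_free(hazy, t, A)  # each channel: (0.8-1.0)/0.5 + 1.0 = 0.6
```

Any error in the estimated t or A propagates directly into J, which is the limitation motivating end-to-end approaches such as MFAF-Net.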
| Original language | English |
|---|---|
| Pages (from-to) | 2293-2307 |
| Number of pages | 15 |
| Journal | Visual Computer |
| Volume | 40 |
| Issue number | 4 |
| DOIs | |
| Publication status | Published - Apr 2024 |
Keywords
- Adaptive fusion
- Deep learning
- Image dehazing
- Multi-level feature