A Novel Transformer-Based Attention Network for Image Dehazing

Guanlei Gao, Jie Cao, Chun Bao, Qun Hao*, Aoqi Ma, Gang Li

*Corresponding author for this work

Research output: Contribution to journal › Article › peer review

12 Citations (Scopus)

Abstract

Image dehazing is challenging because the underlying parameter estimation is ill-posed. Numerous prior-based and learning-based methods have achieved great success. However, most learning-based methods rely on the changes and connections between scale and depth in convolutional neural networks for feature extraction. Although their performance greatly improves on that of prior-based methods, they remain inferior at extracting detailed information. In this paper, we propose an image dehazing model built with a convolutional neural network and a Transformer, called Transformer for image dehazing (TID). First, we propose a Transformer-based channel attention module (TCAM), using a spatial attention module as its supplement. These two modules form an attention module that enhances both channel and spatial features. Second, we use a multiscale parallel residual network as the backbone, which extracts feature information at different scales to achieve feature fusion. We experimented on the RESIDE dataset and then conducted extensive comparisons and ablation studies against state-of-the-art methods. Experimental results show that our proposed method effectively improves the quality of the restored image and also outperforms existing attention modules.
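The abstract combines a channel attention module with a spatial attention module to enhance features along both axes. The authors' TCAM uses a Transformer for the channel branch; as a minimal NumPy sketch of the general channel-then-spatial attention pattern it builds on (not the paper's exact architecture — the bottleneck MLP and the averaged pooling maps below are illustrative stand-ins for the Transformer block and convolution), one possible shape is:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Scale each channel of feat (C, H, W) by a learned gate in (0, 1)."""
    desc = feat.mean(axis=(1, 2))                      # global average pool -> (C,)
    gate = sigmoid(w2 @ np.maximum(w1 @ desc, 0.0))    # bottleneck MLP + ReLU + sigmoid
    return feat * gate[:, None, None]

def spatial_attention(feat):
    """Scale each spatial location of feat (C, H, W) by a gate map in (0, 1)."""
    avg_map = feat.mean(axis=0)                        # channel-wise average pool -> (H, W)
    max_map = feat.max(axis=0)                         # channel-wise max pool -> (H, W)
    gate = sigmoid((avg_map + max_map) / 2.0)          # stand-in for a small conv layer
    return feat * gate[None, :, :]

rng = np.random.default_rng(0)
C, H, W = 8, 4, 4
feat = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // 2, C))                  # hypothetical bottleneck weights
w2 = rng.standard_normal((C, C // 2))
out = spatial_attention(channel_attention(feat, w1, w2))
print(out.shape)  # (8, 4, 4): attention reweights features, shape is preserved
```

Both gates are multiplicative and shape-preserving, which is what lets the two modules be stacked as a single attention block, as described in the abstract.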

Original language: English
Article number: 3428
Journal: Sensors
Volume: 22
Issue: 9
DOI
Publication status: Published - 1 May 2022
