TY - GEN
T1 - An Image Super-Resolution Network Using Multiple Attention Mechanisms
AU - Huang, Jinlong
AU - Fu, Tie
AU - Zhu, Wei
AU - Luo, Huifu
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Single image super-resolution (SR) aims to reconstruct high-resolution (HR) images from low-resolution (LR) inputs. Existing SR algorithms often lose information when handling complex textures and fine details, and they weight features across image regions poorly, degrading reconstruction quality. Recently, networks based on attention mechanisms have shown excellent performance: attention strengthens the model's use of critical input information while de-emphasizing non-critical information. In this study, we improve the conventional transformer framework by incorporating channel and spatial attention mechanisms into the deep feature extraction stage, capturing the channel-wise and spatial relationships of each feature map and enhancing the model's ability to extract high-dimensional information. We also introduce a pixel attention mechanism into the up-sampling module, allowing the model to retain more detail during up-sampling. Validation on benchmark datasets demonstrates that our method outperforms competing models.
AB - Single image super-resolution (SR) aims to reconstruct high-resolution (HR) images from low-resolution (LR) inputs. Existing SR algorithms often lose information when handling complex textures and fine details, and they weight features across image regions poorly, degrading reconstruction quality. Recently, networks based on attention mechanisms have shown excellent performance: attention strengthens the model's use of critical input information while de-emphasizing non-critical information. In this study, we improve the conventional transformer framework by incorporating channel and spatial attention mechanisms into the deep feature extraction stage, capturing the channel-wise and spatial relationships of each feature map and enhancing the model's ability to extract high-dimensional information. We also introduce a pixel attention mechanism into the up-sampling module, allowing the model to retain more detail during up-sampling. Validation on benchmark datasets demonstrates that our method outperforms competing models.
KW - attention mechanism
KW - channel and spatial attention
KW - feature extraction
KW - single image super-resolution
UR - http://www.scopus.com/inward/record.url?scp=85207502949&partnerID=8YFLogxK
U2 - 10.1109/EEI63073.2024.10696216
DO - 10.1109/EEI63073.2024.10696216
M3 - Conference contribution
AN - SCOPUS:85207502949
T3 - 2024 6th International Conference on Electronic Engineering and Informatics, EEI 2024
SP - 1398
EP - 1404
BT - 2024 6th International Conference on Electronic Engineering and Informatics, EEI 2024
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 6th International Conference on Electronic Engineering and Informatics, EEI 2024
Y2 - 28 June 2024 through 30 June 2024
ER -