TY - GEN
T1 - A Multi-Scale Layer-Channel Attention Network for Image Super-Resolution
AU - Wang, Jian
AU - Hirota, Kaoru
AU - Pan, Bei
AU - Dai, Yaping
AU - Jia, Zhiyang
AU - Jiang, Naifu
N1 - Publisher Copyright:
© 2021 IEEE
PY - 2021
Y1 - 2021
N2 - Recently, image super-resolution (SR) methods have widely employed deep convolutional neural networks (CNNs) to improve the quality of reconstructed images by extracting informative features. However, most existing SR networks ignore the interdependencies among layers from different modules, which may hinder further improvement of representational ability. In this paper, a newly designed multi-scale layer-channel attention network (MSLCAN) is proposed for the single image super-resolution (SISR) task. Our contributions are twofold. First, the layer-channel attention module (LCAM) adaptively emphasizes the correlations among different channels and the interrelationships among multi-scale layers. Second, the multi-scale feature extraction module (MSFEM) effectively extracts adequate image information and enhances the network's feature representation ability. In experiments on five standard benchmarks, our MSLCAN outperforms other SISR methods in both quantitative and perceptual quality.
AB - Recently, image super-resolution (SR) methods have widely employed deep convolutional neural networks (CNNs) to improve the quality of reconstructed images by extracting informative features. However, most existing SR networks ignore the interdependencies among layers from different modules, which may hinder further improvement of representational ability. In this paper, a newly designed multi-scale layer-channel attention network (MSLCAN) is proposed for the single image super-resolution (SISR) task. Our contributions are twofold. First, the layer-channel attention module (LCAM) adaptively emphasizes the correlations among different channels and the interrelationships among multi-scale layers. Second, the multi-scale feature extraction module (MSFEM) effectively extracts adequate image information and enhances the network's feature representation ability. In experiments on five standard benchmarks, our MSLCAN outperforms other SISR methods in both quantitative and perceptual quality.
KW - Deep neural network
KW - Image super-resolution
KW - Layer-channel attention
KW - Multi-scale feature
UR - http://www.scopus.com/inward/record.url?scp=85128068699&partnerID=8YFLogxK
U2 - 10.1109/CAC53003.2021.9727699
DO - 10.1109/CAC53003.2021.9727699
M3 - Conference contribution
AN - SCOPUS:85128068699
T3 - Proceeding - 2021 China Automation Congress, CAC 2021
SP - 3740
EP - 3745
BT - Proceeding - 2021 China Automation Congress, CAC 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2021 China Automation Congress, CAC 2021
Y2 - 22 October 2021 through 24 October 2021
ER -