TY - GEN
T1 - Scale-Aware Distillation Network for Lightweight Image Super-Resolution
AU - Lu, Haowei
AU - Lu, Yao
AU - Li, Gongping
AU - Sun, Yanbei
AU - Wang, Shunzhou
AU - Li, Yugang
N1 - Publisher Copyright:
© 2021, Springer Nature Switzerland AG.
PY - 2021
Y1 - 2021
N2 - With the renaissance of deep learning, convolutional neural network-based methods have driven significant progress in many computer vision tasks (e.g., video object segmentation [21, 38, 40], human parsing [39], human-object interaction detection [39]). Many lightweight models have achieved great progress in single image super-resolution. However, their parameter counts remain too large for practical applications, leaving room for further reduction. Meanwhile, multi-scale features, which benefit the reconstruction of multi-scale regions, are usually underutilized. To address these limitations, in this paper we propose a lightweight super-resolution network named the scale-aware distillation network (SDNet). SDNet is built from stacked scale-aware distillation blocks (SDBs), each containing a scale-aware distillation unit (SDU) and a context enhancement (CE) layer. Specifically, the SDU enriches hierarchical features at a granular level via grouped convolutions. Meanwhile, the CE layer further enhances the multi-scale feature representation from the SDU via context learning, extracting more discriminative information. Extensive experiments on commonly used super-resolution datasets show that our method achieves promising results compared with state-of-the-art methods while using fewer parameters.
AB - With the renaissance of deep learning, convolutional neural network-based methods have driven significant progress in many computer vision tasks (e.g., video object segmentation [21, 38, 40], human parsing [39], human-object interaction detection [39]). Many lightweight models have achieved great progress in single image super-resolution. However, their parameter counts remain too large for practical applications, leaving room for further reduction. Meanwhile, multi-scale features, which benefit the reconstruction of multi-scale regions, are usually underutilized. To address these limitations, in this paper we propose a lightweight super-resolution network named the scale-aware distillation network (SDNet). SDNet is built from stacked scale-aware distillation blocks (SDBs), each containing a scale-aware distillation unit (SDU) and a context enhancement (CE) layer. Specifically, the SDU enriches hierarchical features at a granular level via grouped convolutions. Meanwhile, the CE layer further enhances the multi-scale feature representation from the SDU via context learning, extracting more discriminative information. Extensive experiments on commonly used super-resolution datasets show that our method achieves promising results compared with state-of-the-art methods while using fewer parameters.
KW - Context learning
KW - Image super-resolution
KW - Lightweight network
KW - Multi-scale feature learning
UR - http://www.scopus.com/inward/record.url?scp=85118213710&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-88010-1_11
DO - 10.1007/978-3-030-88010-1_11
M3 - Conference contribution
AN - SCOPUS:85118213710
SN - 9783030880095
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 128
EP - 139
BT - Pattern Recognition and Computer Vision - 4th Chinese Conference, PRCV 2021, Proceedings
A2 - Ma, Huimin
A2 - Wang, Liang
A2 - Zhang, Changshui
A2 - Wu, Fei
A2 - Tan, Tieniu
A2 - Wang, Yaonan
A2 - Lai, Jianhuang
A2 - Zhao, Yao
PB - Springer Science and Business Media Deutschland GmbH
T2 - 4th Chinese Conference on Pattern Recognition and Computer Vision, PRCV 2021
Y2 - 29 October 2021 through 1 November 2021
ER -