(SARN) spatial-wise attention residual network for image super-resolution

Wenling Shi, Huiqian Du*, Wenbo Mei, Zhifeng Ma

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

22 Citations (Scopus)

Abstract

Recent research suggests that attention mechanisms can improve the performance of deep-learning-based single image super-resolution (SISR) methods. In this work, we propose a deep spatial-wise attention residual network (SARN) for SISR. Specifically, we propose a novel spatial attention block (SAB) that rescales pixel-wise features by explicitly modeling the interdependencies between pixels on each feature map, encoding where (i.e., at which attentive spatial pixels in the feature map) the visual attention is located. A modified patch-based non-local block can be inserted into the SAB to capture long-distance spatial contextual information and relax the local neighborhood constraint. Furthermore, we design a bottleneck spatial attention module that widens the network so that more information can pass through. Meanwhile, we adopt local and global residual connections so that the network focuses on learning valuable high-frequency information. Extensive experiments show the superiority of the proposed SARN over state-of-the-art methods on benchmark datasets in both accuracy and visual quality.
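The full architecture is described in the paper itself; purely as an illustration of the idea the abstract sketches — rescaling pixel-wise features by a per-position attention map — here is a minimal NumPy toy. The single channel-collapsing weight vector, the sigmoid gating, and all shapes are assumptions for demonstration, not the authors' actual SAB design:

```python
import numpy as np

def spatial_attention_block(features, w_att):
    """Rescale pixel-wise features by a spatial attention map.

    features: (C, H, W) feature map.
    w_att:    (C,) weights that collapse channels into one attention
              score per spatial position (a hypothetical stand-in for
              the paper's learned attention sub-network).
    """
    # One attention score per pixel: weighted sum over channels -> (H, W)
    scores = np.tensordot(w_att, features, axes=([0], [0]))
    # Sigmoid squashes each score into (0, 1)
    att = 1.0 / (1.0 + np.exp(-scores))
    # Broadcast the map over channels and rescale every pixel's features
    return features * att[None, :, :]

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))              # toy (C, H, W) feature map
y = spatial_attention_block(x, rng.standard_normal(8))
print(y.shape)                                   # same shape as the input
```

Because the attention values lie in (0, 1), the block only attenuates features, position by position; the residual connections the abstract mentions would then add the block's output back onto its input.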

Original language: English
Pages (from-to): 1569-1580
Number of pages: 12
Journal: Visual Computer
Volume: 37
Issue number: 6
DOIs
Publication status: Published - Jun 2021

Keywords

  • Non-local block
  • Residual network
  • Spatial attention
  • Super-resolution
