GSA-GAN: Global Spatial Attention Generative Adversarial Networks

Lei An, Jiajia Zhao*, Bo Ma

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)

Abstract

This paper proposes a solution to translating visible images into infrared images, a challenging problem in computer vision. Our solution belongs to unsupervised learning, which has recently become popular for image-to-image translation. However, existing methods do not produce satisfactory results because (1) most existing methods target entertainment scenarios with single scenes and low complexity, whereas the problem addressed in this article is more diverse and more complicated, and (2) the infrared response of an object depends not only on the object itself but also on its surrounding environment, and existing methods cannot correlate objects with long-range dependencies. In this paper, we propose Global Spatial Attention (GSA), which strengthens the dependencies between long-range objects and improves the quality of the synthesized images. Compared with other methods, GSA is more efficient in both memory and computation time. Moreover, we introduce the idea of subspace learning into the neural network to make training more stable. Our method is trained on unpaired visible and infrared images, which are easy to collect. Experimental results show that our method can generate high-quality infrared images from visible images and outperforms state-of-the-art methods.
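For readers unfamiliar with spatial attention in GANs, the sketch below shows a generic global spatial attention block of the non-local / self-attention kind, which lets every pixel aggregate context from all other spatial positions. This is only an illustrative baseline under assumed design choices: the class name GlobalSpatialAttention, the 1x1 query/key/value projections, the channel-reduction factor, and the learned residual weight are not details from the paper, and the authors' GSA is stated to be more space- and time-efficient than such standard blocks, so their actual design likely differs.

```python
# Minimal sketch of a generic global spatial attention block (non-local style).
# All names and design choices here are illustrative assumptions, not the
# paper's GSA implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GlobalSpatialAttention(nn.Module):
    """Lets each spatial position attend to every other position, so that
    long-range objects can influence each other's features."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned weight of the attention branch

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, C/r)
        k = self.key(x).flatten(2)                      # (B, C/r, HW)
        v = self.value(x).flatten(2)                    # (B, C, HW)
        attn = F.softmax(torch.bmm(q, k), dim=-1)       # (B, HW, HW) attention map
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                     # residual connection


if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)
    print(GlobalSpatialAttention(64)(feats).shape)  # torch.Size([2, 64, 32, 32])
```

Such a block is typically inserted between convolutional stages of the generator; its main cost is the (HW x HW) attention map, which is the memory/time bottleneck that more efficient attention designs aim to reduce.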

Original language: English
Pages (from-to): 274-281
Number of pages: 8
Journal: Neurocomputing
Volume: 437
DOIs
Publication status: Published - 21 May 2021

Keywords

  • Generative adversarial networks
  • Image-to-image translation
  • Infrared image
  • Visible image
