SAR-to-Optical Image Translating Through Generate-Validate Adversarial Networks

Hao Shi, Bocheng Zhang, Yupei Wang*, Zihan Cui, Liang Chen

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

9 Citations (Scopus)

Abstract

Synthetic aperture radar (SAR) offers high-resolution imaging under all-weather, day-and-night conditions. However, SAR images are difficult to interpret because of their unique imaging mechanism. SAR-to-optical image translation can assist interpretation and has become a topic of growing interest in remote sensing. In this letter, a SAR-to-optical image translation network, called generate-validate adversarial networks (GVANs), is proposed. More specifically, two Pix2Pix networks form a cyclic structure. A validate module is employed to enhance the training process and improve edge retention. To improve adaptability across multidomain images, an embedded layer is proposed. Additionally, a dilated convolution layer is employed in the generator, which is better suited to the characteristics of SAR images. The proposed method is evaluated on the SEN1-2 dataset, and the results demonstrate its superiority over state-of-the-art methods.
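The abstract notes that a dilated convolution layer in the generator suits the characteristics of SAR images: dilation enlarges the receptive field without adding parameters, helping capture the wide spatial context of speckle. The letter's own implementation is not given here; the sketch below is only a minimal NumPy illustration of how a dilated kernel samples its input (the function name and shapes are illustrative, not from the paper).

```python
import numpy as np

def dilated_conv2d(image, kernel, dilation=1):
    """Single-channel 2D 'valid' convolution with a dilation factor.

    A 3x3 kernel with dilation d samples the input on a sparse grid,
    covering a (2d+1) x (2d+1) receptive field with the same 9 weights.
    This is an illustrative sketch, not the GVAN generator itself.
    """
    k = kernel.shape[0]
    span = dilation * (k - 1)           # spatial extent of the dilated kernel
    H, W = image.shape
    out = np.zeros((H - span, W - span))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # strided slice picks every `dilation`-th pixel under the kernel
            patch = image[i:i + span + 1:dilation, j:j + span + 1:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

img = np.arange(36, dtype=float).reshape(6, 6)
k = np.ones((3, 3)) / 9.0               # simple averaging kernel
plain = dilated_conv2d(img, k, dilation=1)   # 4x4 output, 3x3 receptive field
wide = dilated_conv2d(img, k, dilation=2)    # 2x2 output, 5x5 receptive field
print(plain.shape, wide.shape)
```

With dilation 2, the same nine weights average pixels two steps apart, so each output value summarizes a 5x5 neighborhood at no extra parameter cost.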

Original language: English
Article number: 4506905
Journal: IEEE Geoscience and Remote Sensing Letters
Volume: 19
DOIs
Publication status: Published - 2022

Keywords

  • Generative adversarial networks (GANs)
  • SAR-to-optical image translating
  • U-Net
  • synthetic aperture radar (SAR)
