UPanGAN: Unsupervised pansharpening based on the spectral and spatial loss constrained Generative Adversarial Network

Qizhi Xu, Yuan Li*, Jinyan Nie, Qingjie Liu, Mengyao Guo

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

30 Citations (Scopus)

Abstract

In most CNN-based pansharpening methods, the multispectral (MS) images are taken as the ground truth, while downsampled panchromatic (Pan) and MS images are used as the training data. However, models trained on downsampled images are not well suited to pansharpening MS images at their original spatial resolution, where the spatial and spectral information is richest. To tackle this problem, a novel iterative network based on a spectral and textural loss constrained Generative Adversarial Network (GAN) is proposed for pansharpening. First, instead of directly outputting the fused imagery, the GAN generates the mean difference image; supplying it with a good initial difference image as input makes the network perform better. Second, a coarse-to-fine fusion framework is designed to generate the fused imagery: two optimized discriminators distinguish the generated images, and multi-level fusion of the Pan and MS images produces the best pansharpened image at full resolution. Finally, well-designed loss functions are embedded into both the generator and the discriminators to accurately preserve the fidelity of the fused imagery. We validated the method on images from the QuickBird, GaoFen-2, and WorldView-2 satellites. The experimental results demonstrate that the proposed method achieves better fusion performance than state-of-the-art methods in both visual comparison and quantitative evaluation.
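The spectral and spatial constraints described in the abstract can be expressed at full resolution without any reference image. Below is a minimal PyTorch sketch of that idea, not the paper's actual implementation: the fused image is formed as the upsampled MS plus a predicted difference image, the spectral term compares the fused image degraded back to MS scale against the original MS, and the spatial term compares finite-difference gradients against the Pan image. The average-pooling degradation, L1 form, and band-average intensity are illustrative assumptions; the paper's exact operators and weights may differ.

```python
import torch
import torch.nn.functional as F

def spectral_loss(fused, ms, scale=4):
    """Spectral fidelity: the fused image, degraded back to MS resolution,
    should match the original MS image (average pooling is an assumed
    stand-in for the paper's degradation model)."""
    degraded = F.avg_pool2d(fused, kernel_size=scale)
    return F.l1_loss(degraded, ms)

def spatial_loss(fused, pan):
    """Spatial fidelity: high-frequency detail of the fused image
    (band-averaged intensity) should match that of the Pan image.
    Gradients are approximated with simple finite differences, a
    hypothetical stand-in for the paper's texture operator."""
    intensity = fused.mean(dim=1, keepdim=True)  # crude band average

    def grads(x):
        gx = x[..., :, 1:] - x[..., :, :-1]  # horizontal differences
        gy = x[..., 1:, :] - x[..., :-1, :]  # vertical differences
        return gx, gy

    fx, fy = grads(intensity)
    px, py = grads(pan)
    return F.l1_loss(fx, px) + F.l1_loss(fy, py)

# Toy shapes: 4-band MS at 64x64, Pan at 256x256 (scale factor 4).
ms = torch.rand(1, 4, 64, 64)
pan = torch.rand(1, 1, 256, 256)
ms_up = F.interpolate(ms, scale_factor=4, mode="bicubic", align_corners=False)
diff = torch.zeros_like(ms_up)  # placeholder for the generator's difference image
fused = ms_up + diff            # fused = upsampled MS + predicted difference
loss = spectral_loss(fused, ms) + spatial_loss(fused, pan)
```

Predicting a difference image rather than the fused image itself, as the abstract describes, means the generator only has to learn a residual correction on top of the upsampled MS, which is generally easier to optimize than synthesizing the full image from scratch.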

Original language: English
Pages (from-to): 31-46
Number of pages: 16
Journal: Information Fusion
Volume: 91
DOI
Publication status: Published - Mar 2023
