An Unsupervised SAR and Optical Image Fusion Network Based on Structure-Texture Decomposition

Yuanxin Ye, Wanchun Liu, Liang Zhou, Tao Peng, Qizhi Xu*

*Corresponding author of this work

Research output: Contribution to journal › Article › peer-review

10 Citations (Scopus)

Abstract

Although the unique advantages of optical and synthetic aperture radar (SAR) images motivate their fusion, effectively integrating the complementary features of the two data types remains a challenging problem. To address this, a novel framework is designed based on the observation that the structure of SAR images and the texture of optical images are complementary. The proposed framework, named SOSTF, is an unsupervised end-to-end fusion network that aims to integrate structural features from SAR images and detailed texture features from optical images into the fusion results. The proposed method adopts a nest connection-based architecture comprising an encoder network, a fusion module, and a decoder network. To preserve the structure and texture information of the input images, the encoder is used to extract multiscale features from them. A densely connected convolutional network (DenseNet) then performs feature fusion, and the decoder network reconstructs the fused image. In the training stage, we introduce a structure-texture decomposition model, and a novel texture-preserving and structure-enhancing loss function is designed to train the DenseNet so that the structure and texture features of the fusion results are enhanced. Qualitative and quantitative comparisons with nine state-of-the-art methods demonstrate that the proposed method fuses the complementary features of SAR and optical images more effectively.
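The abstract describes an encoder-fusion-decoder pipeline with DenseNet-based feature fusion and a loss that draws structure from the SAR input and texture from the optical input. The sketch below is a minimal PyTorch illustration of that general scheme, not the paper's actual SOSTF implementation: every module name, channel width, layer count, and loss weight here is an assumption made for illustration, and the multiscale nest connections and the structure-texture decomposition model used during training are omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseBlock(nn.Module):
    """Densely connected conv block: each layer sees all earlier feature maps."""
    def __init__(self, in_ch, growth=16, n_layers=3):
        super().__init__()
        self.blocks = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.blocks.append(nn.Sequential(
                nn.Conv2d(ch, growth, 3, padding=1), nn.ReLU(inplace=True)))
            ch += growth
        self.out_ch = ch

    def forward(self, x):
        feats = [x]
        for block in self.blocks:
            feats.append(block(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

class FusionNet(nn.Module):
    """Encoder -> DenseNet fusion -> decoder, with one encoder branch per modality."""
    def __init__(self):
        super().__init__()
        self.enc_sar = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True))
        self.enc_opt = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True))
        self.fuse = DenseBlock(in_ch=64)                          # DenseNet-style fusion
        self.dec = nn.Conv2d(self.fuse.out_ch, 1, 3, padding=1)  # reconstruction

    def forward(self, sar, opt):
        f = torch.cat([self.enc_sar(sar), self.enc_opt(opt)], dim=1)
        return torch.sigmoid(self.dec(self.fuse(f)))

def gradients(img):
    # Forward differences as a simple image-gradient operator.
    dx = img[..., :, 1:] - img[..., :, :-1]
    dy = img[..., 1:, :] - img[..., :-1, :]
    return dx, dy

def fusion_loss(fused, sar, opt, w_struct=1.0, w_tex=1.0):
    # Structure term: align gradients of the fused image with the SAR input.
    fx, fy = gradients(fused)
    sx, sy = gradients(sar)
    struct = F.l1_loss(fx, sx) + F.l1_loss(fy, sy)
    # Texture term: keep intensities and fine detail close to the optical input.
    tex = F.l1_loss(fused, opt)
    return w_struct * struct + w_tex * tex

# Unsupervised training step on a pair of co-registered patches.
sar = torch.rand(1, 1, 128, 128)
opt = torch.rand(1, 1, 128, 128)
net = FusionNet()
loss = fusion_loss(net(sar, opt), sar, opt)
loss.backward()

Because the loss compares the fused output only against the two inputs, no fusion ground truth is required, which is what makes such training unsupervised.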

Original language: English
Article number: 4028305
Journal: IEEE Geoscience and Remote Sensing Letters
Volume: 19
DOI
Publication status: Published - 2022
