TY - JOUR
T1 - An Unsupervised SAR and Optical Image Fusion Network Based on Structure-Texture Decomposition
AU - Ye, Yuanxin
AU - Liu, Wanchun
AU - Zhou, Liang
AU - Peng, Tao
AU - Xu, Qizhi
N1 - Publisher Copyright:
© 2004-2012 IEEE.
PY - 2022
Y1 - 2022
N2 - Although the unique advantages of optical and synthetic aperture radar (SAR) images promote their fusion, integrating the complementary features of the two types of data and fusing them effectively remains a vital problem. To address this, a novel framework is designed based on the observation that the structure of SAR images and the texture of optical images are complementary. The proposed framework, named SOSTF, is an unsupervised end-to-end fusion network that aims to integrate structural features from SAR images and detailed texture features from optical images into the fusion results. The proposed method adopts a nest connection-based architecture comprising an encoder network, a fusion part, and a decoder network. To preserve the structure and texture information of the input images, the encoder is used to extract multiscale features from the images. Then, a densely connected convolutional network (DenseNet) performs feature fusion. Finally, the fused image is reconstructed by the decoder network. In the training stage, a structure-texture decomposition model is introduced. In addition, a novel texture-preserving and structure-enhancing loss function is designed to train the DenseNet to enhance the structure and texture features of the fusion results. Qualitative and quantitative comparisons with nine advanced methods demonstrate that the proposed method fuses the complementary features of SAR and optical images more effectively.
AB - Although the unique advantages of optical and synthetic aperture radar (SAR) images promote their fusion, integrating the complementary features of the two types of data and fusing them effectively remains a vital problem. To address this, a novel framework is designed based on the observation that the structure of SAR images and the texture of optical images are complementary. The proposed framework, named SOSTF, is an unsupervised end-to-end fusion network that aims to integrate structural features from SAR images and detailed texture features from optical images into the fusion results. The proposed method adopts a nest connection-based architecture comprising an encoder network, a fusion part, and a decoder network. To preserve the structure and texture information of the input images, the encoder is used to extract multiscale features from the images. Then, a densely connected convolutional network (DenseNet) performs feature fusion. Finally, the fused image is reconstructed by the decoder network. In the training stage, a structure-texture decomposition model is introduced. In addition, a novel texture-preserving and structure-enhancing loss function is designed to train the DenseNet to enhance the structure and texture features of the fusion results. Qualitative and quantitative comparisons with nine advanced methods demonstrate that the proposed method fuses the complementary features of SAR and optical images more effectively.
KW - Image fusion
KW - SOSTF
KW - synthetic aperture radar (SAR) and optical images
KW - unsupervised
UR - http://www.scopus.com/inward/record.url?scp=85141564069&partnerID=8YFLogxK
U2 - 10.1109/LGRS.2022.3219341
DO - 10.1109/LGRS.2022.3219341
M3 - Article
AN - SCOPUS:85141564069
SN - 1545-598X
VL - 19
JO - IEEE Geoscience and Remote Sensing Letters
JF - IEEE Geoscience and Remote Sensing Letters
M1 - 4028305
ER -