TY - JOUR
T1 - A brain-inspired approach for SAR-to-optical image translation based on diffusion models
AU - Shi, Hao
AU - Cui, Zihan
AU - Chen, Liang
AU - He, Jingfei
AU - Yang, Jingyi
N1 - Publisher Copyright:
Copyright © 2024 Shi, Cui, Chen, He and Yang.
PY - 2024
Y1 - 2024
AB - Synthetic Aperture Radar (SAR) plays a crucial role in all-weather and all-day Earth observation owing to its distinctive imaging mechanism. However, interpreting SAR images is not as intuitive as interpreting optical images. Therefore, to make SAR images consistent with human cognitive habits and to assist inexperienced interpreters, a generative model is needed to translate SAR images into optical ones. In this work, inspired by the way the human brain processes painting, a novel conditional image-to-image translation framework based on the diffusion model is proposed for SAR-to-optical image translation. Firstly, considering the limited performance of existing CNN-based feature extraction modules, the model draws on self-attention and long-skip-connection mechanisms to enhance feature extraction, aligning it more closely with the memory paradigm observed in the functioning of human brain neurons. Secondly, to address the scarcity of paired SAR-optical images, a data augmentation scheme that does not leak the augmentation mode into the generated mode is designed to improve data efficiency. The proposed SAR-to-optical image translation method is thoroughly evaluated on the SAR2Opt dataset. Experimental results demonstrate its capacity to synthesize high-fidelity optical images without introducing blurriness.
KW - SAR-to-optical image translation
KW - brain-inspired approach
KW - cognitive processes
KW - diffusion model
KW - synthetic aperture radar
UR - http://www.scopus.com/inward/record.url?scp=85184683473&partnerID=8YFLogxK
U2 - 10.3389/fnins.2024.1352841
DO - 10.3389/fnins.2024.1352841
M3 - Article
AN - SCOPUS:85184683473
SN - 1662-4548
VL - 18
JO - Frontiers in Neuroscience
JF - Frontiers in Neuroscience
M1 - 1352841
ER -