TY - GEN
T1 - SAR Image Despeckling via Efficient Multi-Scale Attention Enhanced U-Net
AU - Guo, Zhenyu
AU - Hu, Weidong
AU - Peng, Jincheng
AU - Hu, Guozhen
AU - Feng, Minghao
AU - Zhou, Ming
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Synthetic Aperture Radar (SAR) images are crucial for remote sensing and target recognition due to their all-weather, all-time imaging capabilities. However, speckle noise introduced during imaging degrades image quality and hinders high-level visual tasks. Traditional despeckling methods (e.g., Lee, Frost, SAR-BM3D) struggle to balance noise suppression with structural detail preservation, while deep learning approaches (e.g., U-Net, DnCNN) face challenges in multi-scale feature fusion and attention design, causing computational redundancy and information loss. To address this, we propose an Efficient Multi-scale Attention U-Net (EMA-U-Net), which uses parallel convolution kernels for multi-scale feature extraction and integrates cross-spatial learning with channel reshaping to enhance feature representation and structural preservation. Experiments show that EMA-U-Net outperforms state-of-the-art baselines in PSNR and SSIM, achieving both efficiency and accuracy and demonstrating the potential of efficient multi-scale attention for SAR image despeckling.
AB - Synthetic Aperture Radar (SAR) images are crucial for remote sensing and target recognition due to their all-weather, all-time imaging capabilities. However, speckle noise introduced during imaging degrades image quality and hinders high-level visual tasks. Traditional despeckling methods (e.g., Lee, Frost, SAR-BM3D) struggle to balance noise suppression with structural detail preservation, while deep learning approaches (e.g., U-Net, DnCNN) face challenges in multi-scale feature fusion and attention design, causing computational redundancy and information loss. To address this, we propose an Efficient Multi-scale Attention U-Net (EMA-U-Net), which uses parallel convolution kernels for multi-scale feature extraction and integrates cross-spatial learning with channel reshaping to enhance feature representation and structural preservation. Experiments show that EMA-U-Net outperforms state-of-the-art baselines in PSNR and SSIM, achieving both efficiency and accuracy and demonstrating the potential of efficient multi-scale attention for SAR image despeckling.
KW - EMA
KW - SAR
KW - U-Net
KW - attention mechanism
KW - deep learning
KW - speckle noise reduction
UR - https://www.scopus.com/pages/publications/105013049536
U2 - 10.1109/CVIDL65390.2025.11085729
DO - 10.1109/CVIDL65390.2025.11085729
M3 - Conference contribution
AN - SCOPUS:105013049536
T3 - 2025 6th International Conference on Computer Vision, Image and Deep Learning, CVIDL 2025
SP - 473
EP - 477
BT - 2025 6th International Conference on Computer Vision, Image and Deep Learning, CVIDL 2025
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 6th International Conference on Computer Vision, Image and Deep Learning, CVIDL 2025
Y2 - 23 May 2025 through 25 May 2025
ER -