Improving the Performance of Image Fusion Based on Visual Saliency Weight Map Combined with CNN

Lei Yan, Jie Cao*, Saad Rizvi, Kaiyu Zhang, Qun Hao, Xuemin Cheng

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

18 Citations (Scopus)

Abstract

Convolutional neural networks (CNN), with their deep feature extraction capability, have recently been applied in numerous image fusion tasks. However, the fusion of infrared and visible images leads to loss of fine details and degradation of contrast in the fused image. This deterioration is associated with the conventional 'averaging' rule for base-layer fusion and the relatively large-scale features extracted by the CNN. To overcome these problems, an effective fusion framework based on a visual saliency weight map (VSWM) combined with a CNN is proposed. The proposed framework first employs the VSWM method to improve the contrast of the image under consideration. Next, fine details in the image are preserved by applying multi-resolution singular value decomposition (MSVD) before further processing by the CNN. The promising experimental results show that the proposed method outperforms state-of-the-art methods by scoring the highest over different evaluation metrics such as Q0, multiscale structural similarity (MS_SSIM), and the sum of correlations of differences (SCD).
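To make the core idea concrete, the sketch below shows how a saliency-derived per-pixel weight map can replace the plain 'averaging' rule when fusing the base layers of an infrared and a visible image. This is a minimal illustration only: the saliency estimate used here (deviation of a Gaussian-smoothed image from its mean intensity) and the function names are assumptions for demonstration, not the paper's exact VSWM construction, and the MSVD decomposition and CNN detail-fusion stages are omitted.

```python
# Minimal sketch (not the authors' exact pipeline): base-layer fusion driven by
# a simple visual-saliency estimate instead of a fixed 0.5/0.5 average.
# The saliency measure below is an illustrative assumption.
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_map(img, sigma=3.0):
    """Crude saliency: deviation of the smoothed image from its mean intensity."""
    smoothed = gaussian_filter(img.astype(np.float64), sigma=sigma)
    return np.abs(smoothed - smoothed.mean())

def fuse_base_layers(base_ir, base_vis, eps=1e-12):
    """Fuse two base layers with per-pixel saliency weights in [0, 1]."""
    s_ir = saliency_map(base_ir)
    s_vis = saliency_map(base_vis)
    w_ir = s_ir / (s_ir + s_vis + eps)          # weight map favouring salient regions
    return w_ir * base_ir + (1.0 - w_ir) * base_vis

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ir = rng.random((128, 128))    # stand-ins for decomposed base layers
    vis = rng.random((128, 128))
    fused = fuse_base_layers(ir, vis)
    print(fused.shape, float(fused.min()), float(fused.max()))
```

Compared with plain averaging, this kind of weighting lets locally salient structure (e.g., a hot target in the infrared base layer) dominate the fused result instead of being diluted, which is the contrast-preservation motivation stated in the abstract.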

Original language: English
Article number: 9044861
Pages (from-to): 59976-59986
Number of pages: 11
Journal: IEEE Access
Volume: 8
DOI
Publication status: Published - 2020
