Improving the Performance of Image Fusion Based on Visual Saliency Weight Map Combined with CNN

Lei Yan, Jie Cao*, Saad Rizvi, Kaiyu Zhang, Qun Hao, Xuemin Cheng

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

16 Citations (Scopus)

Abstract

Convolutional neural networks (CNN), with their deep feature extraction capability, have recently been applied to numerous image fusion tasks. However, fusing infrared and visible images often leads to loss of fine details and degraded contrast in the fused image. This deterioration is associated with the conventional 'averaging' rule for base layer fusion and with the relatively coarse features extracted by the CNN. To overcome these problems, an effective fusion framework based on a visual saliency weight map (VSWM) combined with a CNN is proposed. The proposed framework first employs the VSWM method to improve the contrast of the image under consideration. Next, fine details in the image are preserved by applying multi-resolution singular value decomposition (MSVD) before further processing by the CNN. Experimental results show that the proposed method outperforms state-of-the-art methods, scoring highest on several evaluation metrics, including Q0, multiscale structural similarity (MS_SSIM), and the sum of correlations of differences (SCD).
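To illustrate the two ideas the abstract highlights, the following is a minimal NumPy sketch: a saliency-weighted base-layer fusion rule (replacing plain averaging) and a one-level MSVD decomposition in the style of Naidu's multi-resolution SVD. The global-contrast saliency measure and all function names are illustrative assumptions, not the paper's exact VSWM construction.

```python
import numpy as np

def saliency_weight_map(img):
    # Illustrative saliency measure (an assumption; the paper's exact
    # VSWM construction may differ): each pixel's saliency is its
    # absolute intensity difference from the global mean.
    return np.abs(img - img.mean())

def fuse_base_layers(base_a, base_b, eps=1e-12):
    # Saliency-weighted fusion of two base layers, replacing the
    # conventional 'averaging' rule criticized in the abstract.
    sa = saliency_weight_map(base_a)
    sb = saliency_weight_map(base_b)
    wa = sa / (sa + sb + eps)          # per-pixel weight in [0, 1]
    return wa * base_a + (1.0 - wa) * base_b

def msvd_decompose(img):
    # One-level multi-resolution SVD: gather non-overlapping 2x2 blocks
    # into columns, project onto the left singular vectors. The first
    # row of t is the coarse 'base' band; the rest are detail bands.
    h, w = img.shape
    blocks = (img.reshape(h // 2, 2, w // 2, 2)
                 .transpose(1, 3, 0, 2)
                 .reshape(4, -1))
    u, _, _ = np.linalg.svd(blocks, full_matrices=False)
    t = u.T @ blocks
    return u, t  # reconstruct blocks with u @ t
```

Because the fusion weights form a convex combination at every pixel, the fused base layer always lies between the two inputs, which is one way such a rule can retain contrast that plain averaging would wash out.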

Original language: English
Article number: 9044861
Pages (from-to): 59976-59986
Number of pages: 11
Journal: IEEE Access
Volume: 8
Publication status: Published - 2020

Keywords

  • Convolutional neural network
  • image fusion
  • multi-resolution singular value decomposition
  • visual saliency weight map
