Abstract
Convolutional neural networks (CNNs), with their deep feature extraction capability, have recently been applied to numerous image fusion tasks. However, fusing infrared and visible images often leads to a loss of fine details and degraded contrast in the fused image. This deterioration is associated with the conventional 'averaging' rule for base-layer fusion and the relatively large-scale feature extraction performed by the CNN. To overcome these problems, an effective fusion framework based on a visual saliency weight map (VSWM) combined with a CNN is proposed. The proposed framework first employs the VSWM method to improve the contrast of the image under consideration. Next, the fine details in the image are preserved by applying multi-resolution singular value decomposition (MSVD) before further processing by the CNN. The promising experimental results show that the proposed method outperforms state-of-the-art methods by scoring the highest on evaluation metrics such as Q0, multiscale structural similarity (MS_SSIM), and the sum of correlations of differences (SCD).
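The core idea of replacing plain base-layer averaging with saliency-driven weights can be illustrated with a minimal two-scale fusion sketch. The snippet below is an assumption-laden simplification, not the published pipeline: the CNN and MSVD stages are omitted, and the helper names (`two_scale_decompose`, `saliency_weight`), filter sizes, and input file names (`ir.png`, `vis.png`) are hypothetical choices for demonstration only.

```python
# Minimal sketch of saliency-weighted two-scale fusion for pre-registered
# grayscale infrared/visible inputs. Illustrative only; the paper's CNN and
# MSVD stages are not reproduced here.
import cv2
import numpy as np

def two_scale_decompose(img, ksize=31):
    """Split an image into a base layer (local mean) and a detail layer."""
    base = cv2.blur(img, (ksize, ksize))
    return base, img - base

def saliency_weight(img, blur_sigma=5.0):
    """Simple visual-saliency proxy: blurred magnitude of the Laplacian."""
    lap = np.abs(cv2.Laplacian(img, cv2.CV_32F, ksize=3))
    return cv2.GaussianBlur(lap, (0, 0), blur_sigma)

def fuse(ir, vis):
    ir = ir.astype(np.float32) / 255.0
    vis = vis.astype(np.float32) / 255.0

    base_ir, det_ir = two_scale_decompose(ir)
    base_vis, det_vis = two_scale_decompose(vis)

    # Saliency-driven weights replace the conventional 'averaging' rule
    # for the base layers.
    w_ir, w_vis = saliency_weight(ir), saliency_weight(vis)
    w_sum = w_ir + w_vis + 1e-12
    fused_base = (w_ir / w_sum) * base_ir + (w_vis / w_sum) * base_vis

    # Detail layers: keep the coefficient with the larger absolute value.
    fused_detail = np.where(np.abs(det_ir) >= np.abs(det_vis), det_ir, det_vis)

    fused = np.clip(fused_base + fused_detail, 0.0, 1.0)
    return (fused * 255).astype(np.uint8)

if __name__ == "__main__":
    ir = cv2.imread("ir.png", cv2.IMREAD_GRAYSCALE)
    vis = cv2.imread("vis.png", cv2.IMREAD_GRAYSCALE)
    cv2.imwrite("fused.png", fuse(ir, vis))
```

In this simplified form, the weight maps boost the contribution of locally salient regions (e.g. warm targets in the infrared image) to the fused base layer, which is the contrast-preserving behaviour the abstract attributes to the VSWM step.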
| Original language | English |
| --- | --- |
| Article number | 9044861 |
| Pages (from-to) | 59976-59986 |
| Number of pages | 11 |
| Journal | IEEE Access |
| Volume | 8 |
| DOIs | |
| Publication status | Published - 2020 |
Keywords
- Convolutional neural network
- image fusion
- multi-resolution singular value decomposition
- visual saliency weight map