Multi-scale convolutional neural networks and saliency weight maps for infrared and visible image fusion

Chenxuan Yang, Yunan He, Ce Sun, Bingkun Chen, Jie Cao*, Yongtian Wang, Qun Hao

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)

Abstract

Image fusion combines multiple images of the same scene into a single, more informative image; infrared and visible image fusion is an important branch of this field. To tackle the issues of diminished luminosity of the infrared target, inconspicuous target features, and blurred texture in the fused result, this paper introduces a novel and effective fusion framework that merges multi-scale convolutional neural networks (CNNs) with saliency weight maps. First, the method measures the source image features to estimate an initial saliency weight map. Then, the initial weight map is segmented and optimized using a guided filter before being further processed by the CNN. Next, a trained Siamese convolutional network is used to solve the two key problems of activity measurement and weight assignment. Meanwhile, a multi-layer fusion strategy is designed to effectively retain the luminance of the infrared target and the texture information of the visible background. Finally, adaptive adjustment of the fusion coefficients is achieved by employing saliency. The experimental results show that the method outperforms state-of-the-art algorithms in terms of both subjective visual quality and objective evaluation metrics.
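To make the weighted-fusion idea concrete, the sketch below illustrates the classical portion of such a pipeline: estimating per-source saliency weight maps, refining them with a guided filter so they follow source edges, and blending the two inputs with the normalized weights. It is a minimal illustration under assumed choices (frequency-tuned-style saliency, He-style box-filter guided filter, hypothetical parameters radius and eps), not the authors' implementation; in particular it omits the Siamese CNN activity measure and the multi-layer fusion strategy described in the abstract.

```python
# Illustrative sketch only: saliency-weighted IR/visible fusion with
# guided-filter refinement. All function names and parameters are assumptions.
import cv2
import numpy as np

def saliency_map(img):
    """Simple saliency: deviation of the blurred image from its global mean
    intensity (one common frequency-tuned-style choice), scaled to [0, 1]."""
    blur = cv2.GaussianBlur(img, (5, 5), 0)
    sal = np.abs(blur - img.mean())
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Plain box-filter guided filter (He et al. formulation)."""
    ksize = (2 * radius + 1, 2 * radius + 1)
    mean_I = cv2.boxFilter(guide, -1, ksize)
    mean_p = cv2.boxFilter(src, -1, ksize)
    corr_Ip = cv2.boxFilter(guide * src, -1, ksize)
    corr_II = cv2.boxFilter(guide * guide, -1, ksize)
    var_I = corr_II - mean_I * mean_I
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    mean_a = cv2.boxFilter(a, -1, ksize)
    mean_b = cv2.boxFilter(b, -1, ksize)
    return mean_a * guide + mean_b

def fuse(ir, vis):
    """Pixel-wise fusion: saliency -> initial weight maps -> guided-filter
    refinement (each map guided by its own source image) -> weighted blend."""
    ir_f = ir.astype(np.float32) / 255.0
    vis_f = vis.astype(np.float32) / 255.0
    w_ir = guided_filter(ir_f, saliency_map(ir_f))
    w_vis = guided_filter(vis_f, saliency_map(vis_f))
    w_sum = w_ir + w_vis + 1e-12
    fused = (w_ir * ir_f + w_vis * vis_f) / w_sum
    return np.clip(fused * 255.0, 0, 255).astype(np.uint8)

# Usage with hypothetical file names:
# ir  = cv2.imread("ir.png",  cv2.IMREAD_GRAYSCALE)
# vis = cv2.imread("vis.png", cv2.IMREAD_GRAYSCALE)
# cv2.imwrite("fused.png", fuse(ir, vis))
```

In the paper's framework, the hand-crafted weighting step shown here is complemented by a trained Siamese CNN that performs activity measurement and weight assignment across multiple scales; the sketch only conveys how saliency and guided filtering shape the spatial weight maps before blending.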

Original language: English
Article number: 104015
Journal: Journal of Visual Communication and Image Representation
Volume: 98
Publication status: Published - Feb 2024

Keywords

  • CNN
  • Guided filter
  • Image fusion
  • Infrared and visible images
  • Saliency
  • Weight assignment
