Unsupervised Infrared and Visible Image Fusion with Pixel Self-attention

Saijia Cui, Zhiqiang Zhou*, Linhao Li, Erfang Fei

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

3 Citations (Scopus)

Abstract

In this paper, we propose a convolutional neural network (CNN)-based unsupervised infrared and visible image fusion method. The proposed method optimizes both the network structure and the loss functions to obtain better fused images. Specifically, an effective pixel self-attention module is applied to weight the importance of different pixel locations in the feature map, enabling the network to better integrate the salient information in infrared images and the detail information in visible images. As for the loss functions, we adopt a perceptual loss and a texture loss to preserve detail information and improve the visual perception of the fused image. Experimental results demonstrate that our method achieves superior performance compared with other fusion methods in both subjective and objective assessments.
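To make the pixel self-attention idea described in the abstract concrete, the following is a minimal PyTorch sketch of a spatial (pixel-wise) self-attention module in the common non-local style, where every spatial location attends to every other location of the feature map. The module name `PixelSelfAttention`, the channel reduction ratio, and the layer choices are assumptions for illustration only and are not taken from the paper.

```python
# Hypothetical sketch of a pixel-wise (spatial) self-attention module.
# Not the authors' implementation; parameters and structure are assumed.
import torch
import torch.nn as nn


class PixelSelfAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        # 1x1 convolutions producing query / key / value projections
        self.query = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.softmax = nn.Softmax(dim=-1)
        # learnable residual scale, initialized to 0 so the module starts as identity
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        n = h * w
        q = self.query(x).view(b, -1, n).permute(0, 2, 1)    # B x N x C'
        k = self.key(x).view(b, -1, n)                        # B x C' x N
        attn = self.softmax(torch.bmm(q, k))                  # B x N x N attention over pixels
        v = self.value(x).view(b, -1, n)                      # B x C x N
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, h, w)
        return self.gamma * out + x                           # re-weighted features + residual


if __name__ == "__main__":
    feat = torch.randn(1, 64, 32, 32)          # e.g. encoder features of a fusion network
    attended = PixelSelfAttention(64)(feat)
    print(attended.shape)                       # torch.Size([1, 64, 32, 32])
```

In such a design, the attention map lets the fusion network emphasize salient pixel locations (e.g. bright thermal targets from the infrared input) while preserving fine detail regions from the visible input; the exact module used in the paper may differ.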

Original language: English
Title of host publication: Proceedings of the 33rd Chinese Control and Decision Conference, CCDC 2021
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 437-441
Number of pages: 5
ISBN (Electronic): 9781665440899
DOIs
Publication status: Published - 2021
Event: 33rd Chinese Control and Decision Conference, CCDC 2021 - Kunming, China
Duration: 22 May 2021 – 24 May 2021

Publication series

Name: Proceedings of the 33rd Chinese Control and Decision Conference, CCDC 2021

Conference

Conference: 33rd Chinese Control and Decision Conference, CCDC 2021
Country/Territory: China
City: Kunming
Period: 22/05/21 – 24/05/21

Keywords

  • Convolutional neural network
  • Image fusion
  • Self-attention
