Multi-scene image fusion via memory aware synapses

Bo Meng*, Huaizhou Liu, Zegang Ding

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Current image fusion methods are primarily designed for infrared and visible images under normal illumination conditions. In multi-scene environments with varying lighting, weather, or scene characteristics, existing methods face challenges, particularly under low-light conditions. Because visible images degrade severely in such scenarios, these methods struggle with insufficient feature representation, poor fusion quality, and knowledge loss during task switching. To address these problems, this paper proposes a multi-scene image fusion method based on multi-scale adaptive fusion and continuous learning (MMF-Fusion). A multi-scale adaptive fusion network is established to continuously learn image features across multiple scenes, improving the quality of image fusion. The specific methods are as follows: (1) A hybrid CNN-Transformer structure is proposed to enhance feature representation in different scenes by fusing the local features extracted by the CNN with the global features extracted by the Transformer. (2) A new FFM structure is proposed to fuse multi-scale and multi-scene features, making full use of scene information at different scales to improve fusion quality. (3) The Memory Aware Synapses (MAS) continuous learning method is used to train the model and compute the loss function, which retains the visible-light features of fused images across scenes, effectively reducing the loss of color information in dark-light conditions and mitigating knowledge loss during task switching. Extensive experimental results show that MMF-Fusion outperforms state-of-the-art algorithms in both visual quality and quantitative evaluation. In particular, multimodal fusion and low-illumination enhancement provide more effective information in the fused images and facilitate high-level vision tasks.
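
For readers unfamiliar with Memory Aware Synapses, the following is a minimal, illustrative PyTorch sketch of the general MAS idea referenced in the abstract: parameter importance is estimated from the gradient of the squared L2 norm of the network output, and a quadratic penalty anchors important parameters when switching to a new scene/task. The model, data loader, and lambda value are placeholders, and this is not the paper's actual implementation or loss design.

    # Illustrative sketch of Memory Aware Synapses (MAS) regularization (Aljundi et al., 2018).
    # Model, data loader, and hyperparameters are hypothetical placeholders.
    import torch

    def estimate_mas_importance(model, data_loader, device="cpu"):
        """Accumulate |d ||f(x)||^2 / d theta| over sample inputs (MAS importance weights)."""
        importance = {n: torch.zeros_like(p)
                      for n, p in model.named_parameters() if p.requires_grad}
        model.eval()
        n_samples = 0
        for batch in data_loader:
            x = batch[0] if isinstance(batch, (list, tuple)) else batch
            x = x.to(device)
            model.zero_grad()
            out = model(x)
            # Squared L2 norm of the output serves as a label-free surrogate objective.
            out.pow(2).sum().backward()
            for n, p in model.named_parameters():
                if p.grad is not None:
                    importance[n] += p.grad.abs()
            n_samples += x.size(0)
        for n in importance:
            importance[n] /= max(n_samples, 1)
        return importance

    def mas_penalty(model, importance, old_params, lam=1.0):
        """Quadratic penalty anchoring parameters that were important for previous scenes/tasks."""
        device = next(model.parameters()).device
        loss = torch.zeros((), device=device)
        for n, p in model.named_parameters():
            if n in importance:
                loss = loss + (importance[n] * (p - old_params[n]).pow(2)).sum()
        return lam * loss

When training on a new scene, the penalty is added to the task loss (e.g. total = fusion_loss + mas_penalty(...)), so parameters deemed important for earlier scenes change little, which is how MAS mitigates knowledge loss during task switching.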

Original language: English
Article number: 14280
Journal: Scientific Reports
Volume: 15
Issue number: 1
DOIs
Publication status: Published - Dec 2025
Externally published: Yes

Keywords

  • Continuous learning
  • Image fusion
  • Multi-modal image
  • Multi-scale feature
  • Multi-scene image fusion