Abstract
This paper proposes a multi-scale real-time infrared and visible image fusion model based on nest connection, addressing the long running times, unnatural fusion strategies, and inability to extract multi-scale deep features that limit existing fusion methods. First, multi-scale deep features are extracted by a feature extractor. Then, fused feature maps carrying these multi-scale deep features are generated by the fusion network. Finally, the fused image is reconstructed by an image reconstructor. In subjective qualitative comparison with other algorithms on common datasets, the proposed algorithm preserves sharp image edges while maintaining image intensity, and achieves better fusion under complex conditions such as overexposure, target occlusion, and blurred detail. In objective quantitative comparison, it obtains 5 best and 2 second-best values across 9 evaluation metrics spanning 4 categories: information theory, image features, image structural similarity, and human perception; the remaining two metrics also show good performance. Fusion time is also noticeably reduced. Experimental verification shows that the model is highly practical and effectively overcomes the current shortcomings of image fusion techniques.
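The three-stage pipeline summarized above (feature extraction, fusion, reconstruction) can be sketched in miniature. The following is a hedged, illustrative toy only, not the authors' nest-connection network: the "multi-scale deep features" are stand-in average-pooled pyramids, and the fusion rule is a simple element-wise maximum; all function names and the toy inputs are assumptions for illustration.

```python
# Toy sketch of the extract -> fuse -> reconstruct pipeline from the abstract.
# NOT the paper's nest-connection model: features here are plain average-pooled
# pyramids and fusion is an element-wise max, chosen only to make the three
# stages concrete and runnable.

def downsample(img):
    """2x average pooling over a 2D list (stand-in for a coarser feature scale)."""
    h, w = len(img), len(img[0])
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

def upsample(img):
    """Nearest-neighbour 2x upsampling (stand-in reconstruction step)."""
    out = []
    for row in img:
        wide = [v for v in row for _ in (0, 1)]  # duplicate each column
        out.append(wide)
        out.append(list(wide))                   # duplicate each row
    return out

def extract_features(img, levels=2):
    """Stage 1: build a multi-scale pyramid of features."""
    pyramid = [img]
    for _ in range(levels - 1):
        pyramid.append(downsample(pyramid[-1]))
    return pyramid

def fuse(ir_feats, vis_feats):
    """Stage 2: fuse per scale; max keeps the brighter (more salient) response."""
    return [[[max(a, b) for a, b in zip(ra, rb)]
             for ra, rb in zip(fa, fb)]
            for fa, fb in zip(ir_feats, vis_feats)]

def reconstruct(fused_pyramid):
    """Stage 3: upsample coarser scales to full size and average all scales."""
    acc = [row[:] for row in fused_pyramid[0]]
    for level, feat in enumerate(fused_pyramid[1:], start=1):
        up = feat
        for _ in range(level):
            up = upsample(up)
        acc = [[a + u for a, u in zip(ra, ru)] for ra, ru in zip(acc, up)]
    n = len(fused_pyramid)
    return [[v / n for v in row] for row in acc]

# Toy 4x4 "infrared" and "visible" inputs (hypothetical values).
ir  = [[0.9, 0.1, 0.1, 0.1],
       [0.1, 0.9, 0.1, 0.1],
       [0.1, 0.1, 0.9, 0.1],
       [0.1, 0.1, 0.1, 0.9]]
vis = [[0.2, 0.8, 0.2, 0.2],
       [0.2, 0.2, 0.8, 0.2],
       [0.2, 0.2, 0.2, 0.8],
       [0.8, 0.2, 0.2, 0.2]]

fused = reconstruct(fuse(extract_features(ir), extract_features(vis)))
```

The design mirrors the abstract's structure: each stage is a separate function, so the toy fusion rule could be swapped for a learned fusion network without touching extraction or reconstruction.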
| Translated title of the contribution | Multi-scale infrared and visible image fusion based on nest connection |
|---|---|
| Original language | Chinese (Simplified) |
| Pages (from-to) | 683-691 |
| Number of pages | 9 |
| Journal | Beijing Hangkong Hangtian Daxue Xuebao/Journal of Beijing University of Aeronautics and Astronautics |
| Volume | 51 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - Feb 2025 |