SADFusion: A multi-scale infrared and visible image fusion method based on salient-aware and domain-specific

Zhijia Yang, Kun Gao*, Yuxuan Mao, Yanzheng Zhang, Xiaodian Zhang, Zibo Hu, Junwei Wang, Hong Wang, Shuzhong Li

*Corresponding author of this work

Research output: Contribution to journal › Article › peer-review

1 citation (Scopus)

Abstract

Image fusion aims to generate an informative image that preserves as many features and details of the source images as possible. However, existing deep-learning fusion methods rarely consider the discrepancy between the infrared and visible (IVIF) modalities, and they lose control of the balance between preserving texture details and thermal targets, owing to the identical structure and parameters of the multi-modality feature extractors and the weak domain guidance during network training. In this paper, a novel multi-scale fusion network based on salient-aware and domain-specific methods is proposed, termed SADFusion. To boost the multi-modality feature extraction performance of our method, the proposed network adopts dual encoders with short connections and a multi-scale structure. Moreover, a domain-specific framework and a corresponding training strategy are designed to achieve consistent encoding across the different image modalities. In addition, multi-scale attention fusion modules (MSFAF modules) are proposed to effectively fuse the extracted complementary features at every scale. Finally, we construct a specific salient-aware loss that guides the model to trade off the preservation of necessary information, by utilizing salient modality features as pixel-to-pixel intensity and gradient maps. Experiments on public datasets demonstrate the superiority of our method over state-of-the-art fusion methods: it particularly highlights the targets and retains the effective information.
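The abstract describes a salient-aware loss built from pixel-to-pixel intensity and gradient maps. The paper's exact formulation is not given here, so the following is only a minimal NumPy sketch of that idea, assuming the common choice of element-wise maximum maps of the source images as the salient targets and L1 distances as the penalty terms; the function name and weighting are hypothetical.

```python
import numpy as np

def salient_aware_loss(fused, ir, vis):
    """Hypothetical sketch of a salient-aware loss.

    Intensity term: L1 distance between the fused image and the
    pixel-wise maximum of the source intensities.
    Gradient term: L1 distance between the fused gradient magnitude
    and the pixel-wise maximum of the source gradient magnitudes.
    """
    def grad_mag(img):
        # Gradient magnitude via central differences.
        gy, gx = np.gradient(img.astype(np.float64))
        return np.hypot(gx, gy)

    intensity_target = np.maximum(ir, vis)                      # salient intensity map
    gradient_target = np.maximum(grad_mag(ir), grad_mag(vis))   # salient gradient map

    loss_int = np.abs(fused - intensity_target).mean()
    loss_grad = np.abs(grad_mag(fused) - gradient_target).mean()
    return loss_int + loss_grad

# Toy example: a fused image equal to the pixel-wise maximum of the
# sources drives the intensity term to zero.
ir = np.array([[0.2, 0.9], [0.1, 0.8]])
vis = np.array([[0.5, 0.3], [0.6, 0.2]])
fused = np.maximum(ir, vis)
print(salient_aware_loss(fused, ir, vis))
```

In practice such a loss would be implemented with a deep-learning framework's differentiable operations so it can back-propagate through the fusion network; the NumPy version above only illustrates the target maps.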

Original language: English
Article number: 104925
Journal: Infrared Physics and Technology
Volume: 135
DOI
Publication status: Published - Dec 2023

