TY - JOUR
T1 - EV-Fusion
T2 - A Novel Infrared and Low-Light Color Visible Image Fusion Network Integrating Unsupervised Visible Image Enhancement
AU - Zhang, Xin
AU - Wang, Xia
AU - Yan, Changda
AU - Sun, Qiyang
N1 - Publisher Copyright:
© 2001-2012 IEEE.
PY - 2024/2/15
Y1 - 2024/2/15
N2 - Infrared and visible image fusion can effectively integrate the advantages of two source images, preserving significant target information and rich texture details. However, most existing fusion methods are designed only for well-illuminated scenes and tend to lose details in low-light scenes because of the poor brightness of visible images. Some methods incorporate a light adjustment module, but they typically focus only on enhancing intensity information and neglect the enhancement of color features, resulting in unsatisfactory visual effects in the fused images. To address this issue, this article proposes a novel method called EV-Fusion, which explores the potential color and detail features in visible images and improves the visual perception of fused images. Specifically, an unsupervised image enhancement module is designed that effectively restores texture, structure, and color information in visible images using several non-reference loss functions. Then, an intensity image fusion module is devised to integrate the enhanced visible image and the infrared image. Moreover, to strengthen the infrared salient object features in the fused images, we propose an infrared bilateral-guided salience map that is embedded into the fusion loss functions. Extensive experiments demonstrate that our method outperforms state-of-the-art (SOTA) infrared and visible image fusion methods.
AB - Infrared and visible image fusion can effectively integrate the advantages of two source images, preserving significant target information and rich texture details. However, most existing fusion methods are designed only for well-illuminated scenes and tend to lose details in low-light scenes because of the poor brightness of visible images. Some methods incorporate a light adjustment module, but they typically focus only on enhancing intensity information and neglect the enhancement of color features, resulting in unsatisfactory visual effects in the fused images. To address this issue, this article proposes a novel method called EV-Fusion, which explores the potential color and detail features in visible images and improves the visual perception of fused images. Specifically, an unsupervised image enhancement module is designed that effectively restores texture, structure, and color information in visible images using several non-reference loss functions. Then, an intensity image fusion module is devised to integrate the enhanced visible image and the infrared image. Moreover, to strengthen the infrared salient object features in the fused images, we propose an infrared bilateral-guided salience map that is embedded into the fusion loss functions. Extensive experiments demonstrate that our method outperforms state-of-the-art (SOTA) infrared and visible image fusion methods.
KW - Image fusion
KW - infrared and visible image
KW - nighttime environment
KW - visible image enhancement
UR - http://www.scopus.com/inward/record.url?scp=85181554582&partnerID=8YFLogxK
U2 - 10.1109/JSEN.2023.3346886
DO - 10.1109/JSEN.2023.3346886
M3 - Article
AN - SCOPUS:85181554582
SN - 1530-437X
VL - 24
SP - 4920
EP - 4934
JO - IEEE Sensors Journal
JF - IEEE Sensors Journal
IS - 4
ER -