TY - JOUR
T1 - Leveraging a self-adaptive mean teacher model for semi-supervised multi-exposure image fusion
AU - Huang, Qianjun
AU - Wu, Guanyao
AU - Jiang, Zhiying
AU - Fan, Wei
AU - Xu, Bin
AU - Liu, Jinyuan
N1 - Publisher Copyright:
© 2024
PY - 2024/12
Y1 - 2024/12
N2 - Deep learning-based methods have recently shown remarkable advancements in multi-exposure image fusion (MEF), demonstrating significant achievements in improving the fusion quality. Despite their success, the majority of reference images in MEF are artificially generated, inevitably introducing a portion of low-quality ones. Existing methods either utilize these mixed-quality reference images for supervised learning or heavily depend on source images for unsupervised learning, making it challenging for the fusion results to accurately reflect real-world illumination conditions. To overcome the impact of unreliable factors in references, we propose a self-adaptive mean teacher-based semi-supervised learning framework tailored for MEF, termed SAMT-MEF. Its self-adaptiveness is reflected in two respects. Firstly, we establish a self-adaptive set to retain the best-ever outputs from the teacher as pseudo labels, employing a well-crafted hybrid metric for its updates. Secondly, we employ contrastive learning to further assist the self-adaptive set in alleviating overfitting to inferior pseudo labels. Our proposed method, backed by abundant empirical evidence, outperforms state-of-the-art methods quantitatively and qualitatively on both reference and non-reference datasets. Furthermore, in some scenarios, the fusion results surpass the reference images, showcasing superior performance in practical applications. Source code is publicly available at https://github.com/hqj9994ever/SAMT-MEF.
AB - Deep learning-based methods have recently shown remarkable advancements in multi-exposure image fusion (MEF), demonstrating significant achievements in improving the fusion quality. Despite their success, the majority of reference images in MEF are artificially generated, inevitably introducing a portion of low-quality ones. Existing methods either utilize these mixed-quality reference images for supervised learning or heavily depend on source images for unsupervised learning, making it challenging for the fusion results to accurately reflect real-world illumination conditions. To overcome the impact of unreliable factors in references, we propose a self-adaptive mean teacher-based semi-supervised learning framework tailored for MEF, termed SAMT-MEF. Its self-adaptiveness is reflected in two respects. Firstly, we establish a self-adaptive set to retain the best-ever outputs from the teacher as pseudo labels, employing a well-crafted hybrid metric for its updates. Secondly, we employ contrastive learning to further assist the self-adaptive set in alleviating overfitting to inferior pseudo labels. Our proposed method, backed by abundant empirical evidence, outperforms state-of-the-art methods quantitatively and qualitatively on both reference and non-reference datasets. Furthermore, in some scenarios, the fusion results surpass the reference images, showcasing superior performance in practical applications. Source code is publicly available at https://github.com/hqj9994ever/SAMT-MEF.
KW - Contrastive learning
KW - Mean teacher
KW - Multi-exposure image fusion
KW - Semi-supervised learning
UR - http://www.scopus.com/inward/record.url?scp=85197436559&partnerID=8YFLogxK
U2 - 10.1016/j.inffus.2024.102534
DO - 10.1016/j.inffus.2024.102534
M3 - Article
AN - SCOPUS:85197436559
SN - 1566-2535
VL - 112
JO - Information Fusion
JF - Information Fusion
M1 - 102534
ER -