A multi-autoencoder fusion network guided by perceptual distillation

Xingwang Liu, Kaoru Hirota, Zhiyang Jia, Yaping Dai*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

6 Citations (Scopus)

Abstract

In this study, a novel distillation paradigm named perceptual distillation is proposed to guide the training of image fusion networks without ground truths. In this paradigm, the student network, which we call the main autoencoder, takes in source images and produces a fused image, while the teacher network is a well-trained network used to compute teacher representations of images. Knowledge in the teacher representations of the source images is distilled and transferred to the student main autoencoder with the help of a perceptual saliency scheme. The scheme also derives a pixel-level pixel-compensation scheme, which combines with the source images to enhance the pixel intensity of the fused image. Moreover, a multi-autoencoder architecture is developed by attaching two auxiliary decoders behind the main autoencoder. The architecture is trained with self-supervision to make fusion training robust against the limitations of the teacher network. Qualitative and quantitative experiments demonstrate that the proposed network achieves state-of-the-art performance on multi-source image fusion compared with existing fusion methods.
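The distillation idea in the abstract can be sketched in a toy form: teacher representations of each source image are weighted by a per-pixel saliency map (softmax-normalised across sources), and the fused image's representations are pulled toward that weighted target. This is a minimal illustration only; the hand-rolled gradient "teacher" stands in for the paper's pretrained teacher network, and all function names here are hypothetical, not the authors' implementation.

```python
import numpy as np

def teacher_features(img):
    # Toy stand-in for a pretrained teacher network: stack of
    # horizontal/vertical absolute gradients as a 2-channel feature map.
    gx = np.abs(np.diff(img, axis=1, append=img[:, -1:]))
    gy = np.abs(np.diff(img, axis=0, append=img[-1:, :]))
    return np.stack([gx, gy])            # shape (2, H, W)

def perceptual_saliency(feats_list):
    # Per-pixel saliency from teacher feature energy, softmax-normalised
    # across the K source images so the weights sum to 1 at each pixel.
    energy = np.stack([f.sum(axis=0) for f in feats_list])   # (K, H, W)
    e = np.exp(energy - energy.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

def distillation_loss(fused, sources):
    # Distillation target: saliency-weighted combination of the
    # teacher representations of the source images.
    feats = [teacher_features(s) for s in sources]
    w = perceptual_saliency(feats)                           # (K, H, W)
    target = sum(w[k][None] * feats[k] for k in range(len(sources)))
    diff = teacher_features(fused) - target
    return float((diff ** 2).mean())
```

In training, a loss of this shape would be minimised over the fused image produced by the main autoencoder, so that salient teacher-level structure from each source is preserved.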

Original language: English
Pages (from-to): 1-20
Number of pages: 20
Journal: Information Sciences
Volume: 606
DOI
Publication status: Published - Aug 2022
