A multi-autoencoder fusion network guided by perceptual distillation

Xingwang Liu, Kaoru Hirota, Zhiyang Jia, Yaping Dai*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

In this study, a novel distillation paradigm named perceptual distillation is proposed to guide the training of image fusion networks without ground truths. In this paradigm, the student network, which we call the main autoencoder, takes in source images and produces a fused image, while the teacher network is a well-trained network used to compute teacher representations of images. Knowledge in the teacher representations of the source images is distilled and transferred to the student main autoencoder with the help of a perceptual saliency scheme. The scheme also yields a pixel-level compensation scheme, which combines the source images to enhance the pixel intensity of the fused image. Moreover, a multi-autoencoder architecture is developed by attaching two auxiliary decoders behind the main autoencoder. The architecture is trained with self-supervision to make fusion training robust to the limitations of the teacher network. Qualitative and quantitative experiments demonstrate that the proposed network achieves state-of-the-art performance on multi-source image fusion compared with existing fusion methods.
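For readers who want the mechanics at a glance, the training signal described above can be sketched in a few lines of PyTorch. Everything below is a hedged illustration: the teacher (a truncated pretrained VGG-19), the saliency measure (mean absolute feature activity), and both weighting formulas are assumptions standing in for the paper's exact definitions.

```python
# Minimal PyTorch sketch of a perceptual-distillation objective in the spirit
# of the abstract. The teacher choice (VGG-19), the saliency definition, and
# the weighting formulas are illustrative assumptions, not the paper's own.
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

# Frozen, well-trained teacher: a pretrained VGG-19 truncated at an
# intermediate layer to produce "teacher representations".
teacher = vgg19(weights=VGG19_Weights.DEFAULT).features[:16].eval()
for p in teacher.parameters():
    p.requires_grad_(False)

def perceptual_distillation_loss(fused, src_a, src_b):
    """Pull the fused image's teacher features toward a saliency-weighted
    mix of the two source images' teacher features."""
    f_fused, f_a, f_b = teacher(fused), teacher(src_a), teacher(src_b)
    # Hypothetical perceptual saliency: mean absolute feature activity,
    # normalised across the two sources with a softmax.
    s_a = f_a.abs().mean(dim=1, keepdim=True)
    s_b = f_b.abs().mean(dim=1, keepdim=True)
    w_a = torch.softmax(torch.cat([s_a, s_b], dim=1), dim=1)[:, :1]
    target = w_a * f_a + (1.0 - w_a) * f_b
    return F.mse_loss(f_fused, target)

def pixel_compensation_loss(fused, src_a, src_b, w_pix):
    """Pixel-level compensation: keep the fused image close to a weighted
    blend of the sources; w_pix would be derived from the saliency scheme."""
    return F.l1_loss(fused, w_pix * src_a + (1.0 - w_pix) * src_b)
```

In the full method, the two auxiliary decoders would contribute additional self-supervised reconstruction losses on top of these terms; the exact saliency and weighting definitions are given in the paper.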

Original language: English
Pages (from-to): 1-20
Number of pages: 20
Journal: Information Sciences
Volume: 606
DOIs
Publication status: Published - Aug 2022

Keywords

  • Image fusion
  • Knowledge distillation
  • Perceptual loss
  • Unsupervised learning
