Dual Learning Generative Adversarial Network for Dynamic Scene Deblurring (面向动态场景去模糊的对偶学习生成对抗网络)

Ye Ji, Ya Ping Dai*, Kaoru Hirota, Shuai Shao

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

To address the problem of dynamic scene deblurring, a dual learning generative adversarial network (DLGAN) is proposed in this paper. Under the dual-learning training mode, the network performs image deblurring using unpaired blurry and sharp images, so the training set no longer needs to consist of blurry images paired with their corresponding sharp images. The DLGAN exploits the duality between the deblurring task and the reblurring task to establish a feedback signal, which constrains the two tasks to learn from and update each other from two opposite directions until convergence. Experimental results show that the DLGAN outperforms nine image deblurring methods trained on paired datasets in terms of structural similarity and visual quality.
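The feedback signal between the deblurring task and the reblurring task described in the abstract resembles a cycle-consistency constraint. Below is a minimal sketch of that idea, assuming the feedback reduces to L1 reconstruction losses in both directions; the names G_deblur, G_reblur, and TinyGenerator are hypothetical placeholders and do not reproduce the paper's actual attention-guided architecture, feature-map loss, or discriminators.

```python
# Sketch of a dual-learning feedback signal, assuming it reduces to a
# cycle-consistency loss between a deblurring and a reblurring generator.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Placeholder conv net standing in for the paper's generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

G_deblur = TinyGenerator()   # blurry -> sharp direction
G_reblur = TinyGenerator()   # sharp -> blurry direction (the dual task)
l1 = nn.L1Loss()

def dual_feedback_loss(blurry, sharp):
    """Each task must be able to undo the other, so unpaired blurry and
    sharp batches can supervise one another without ground-truth pairs."""
    fake_sharp = G_deblur(blurry)
    fake_blurry = G_reblur(sharp)
    # Closing the loop in both directions yields the feedback signal.
    rec_blurry = G_reblur(fake_sharp)
    rec_sharp = G_deblur(fake_blurry)
    return l1(rec_blurry, blurry) + l1(rec_sharp, sharp)

# Usage with unpaired batches (random tensors stand in for images).
blurry = torch.rand(2, 3, 64, 64)
sharp = torch.rand(2, 3, 64, 64)   # need not correspond to `blurry`
loss = dual_feedback_loss(blurry, sharp)
loss.backward()
```

In a full training loop this reconstruction term would be combined with adversarial losses so that the generated sharp and blurry images also match their respective target distributions.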

Translated title of the contribution: Dual learning generative adversarial network for dynamic scene deblurring
Original language: Traditional Chinese
Pages (from-to): 1305-1314
Number of pages: 10
Journal: Kongzhi yu Juece/Control and Decision
Volume: 39
Issue number: 4
DOI
Publication status: Published - Apr 2024

Keywords

  • attention-guided
  • dual learning
  • dynamic scene deblurring
  • feature map loss function
  • generative adversarial network
