Dual learning generative adversarial network for dynamic scene deblurring

Translated title of the contribution: Dual learning generative adversarial network for dynamic scene deblurring

Ye Ji, Ya Ping Dai*, Kaoru Hirota, Shuai Shao

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

For the problem of dynamic scene deblurring, a dual learning generative adversarial network (DLGAN) is proposed in this paper. The network deblurs images in a dual-learning training mode using unpaired blurry and sharp images, so the training set no longer needs to consist of blurry images paired with their corresponding sharp images. The DLGAN exploits the duality between the deblurring task and the reblurring task to establish a feedback signal, which constrains the two tasks to learn from and update each other in two opposite directions until convergence. Experimental results show that the DLGAN outperforms nine image deblurring methods trained on paired datasets in both structural similarity and visual evaluation.
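The core idea of the dual-learning feedback can be sketched numerically: a deblurring operator and a reblurring operator are chained, and the reconstruction error between the reblurred result and the original blurry input serves as the feedback signal, requiring no paired sharp ground truth. This is a minimal illustrative sketch, not the paper's actual network; the `blur`, `deblur`, and `cycle_feedback` functions are hypothetical toy stand-ins for the two generators.

```python
import numpy as np

def blur(img, k=3):
    """Toy 'reblurring' operator: horizontal moving average
    (a stand-in for the reblur generator, NOT the paper's model)."""
    kernel = np.ones(k) / k
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, img)

def deblur(img, alpha=1.0):
    """Toy 'deblurring' operator: unsharp masking
    (a stand-in for the deblur generator)."""
    return img + alpha * (img - blur(img))

def cycle_feedback(blurry, alpha=1.0):
    """Dual-learning feedback signal: deblur the blurry input,
    reblur the result, and measure reconstruction error against
    the original blurry image -- no paired sharp image needed."""
    restored = deblur(blurry, alpha)
    reblurred = blur(restored)
    return float(np.mean((reblurred - blurry) ** 2))

rng = np.random.default_rng(0)
sharp = rng.random((8, 8))       # unpaired "sharp" sample
blurry = blur(sharp)             # unpaired "blurry" sample
loss = cycle_feedback(blurry)    # feedback constraining both directions
```

In the actual DLGAN this scalar would be one loss term backpropagated through both generators, so the deblurring and reblurring directions update each other until convergence.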

Translated title of the contribution: Dual learning generative adversarial network for dynamic scene deblurring
Original language: Chinese (Traditional)
Pages (from-to): 1305-1314
Number of pages: 10
Journal: Kongzhi yu Juece/Control and Decision
Volume: 39
Issue number: 4
DOIs
Publication status: Published - Apr 2024

