Abstract
To address the problem of dynamic scene deblurring, this paper proposes a dual learning generative adversarial network (DLGAN). Under the dual learning training scheme, the network learns to deblur from unpaired blurry and sharp images, so the training set no longer needs to consist of blurry images paired with their corresponding sharp counterparts. The DLGAN exploits the duality between the deblurring task and the reblurring task to establish a feedback signal, and uses this signal to constrain the two tasks so that they learn from and update each other in two directions until convergence. Experimental results show that the DLGAN outperforms nine image deblurring methods trained on paired datasets in terms of structural similarity and visual quality.
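The feedback signal described above can be illustrated with a minimal sketch of a dual learning training step: a deblurring network and a reblurring network reconstruct each other's inputs, so each unpaired image provides its own supervision. The toy generator, loss weights, and optimizer settings below are illustrative assumptions, not the DLGAN architecture from the paper, and the adversarial, attention-guided, and feature map losses listed in the keywords are omitted for brevity.

```python
# Minimal sketch of a dual-learning feedback loop (assumed setup, not the paper's exact model).
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Toy image-to-image generator standing in for the deblur/reblur networks."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.net(x)  # predict a residual correction

deblur = TinyGenerator()   # primal task: blurry -> sharp
reblur = TinyGenerator()   # dual task:   sharp  -> blurry
opt = torch.optim.Adam(list(deblur.parameters()) + list(reblur.parameters()), lr=1e-4)
l1 = nn.L1Loss()

def dual_step(blurry_batch, sharp_batch):
    """One training step: each direction reconstructs its own input through the
    other network, giving a feedback signal that needs no paired images."""
    # blurry -> deblur -> fake sharp -> reblur -> reconstructed blurry
    fake_sharp = deblur(blurry_batch)
    loss_blurry = l1(reblur(fake_sharp), blurry_batch)
    # sharp -> reblur -> fake blurry -> deblur -> reconstructed sharp
    fake_blurry = reblur(sharp_batch)
    loss_sharp = l1(deblur(fake_blurry), sharp_batch)
    loss = loss_blurry + loss_sharp  # adversarial / feature-map terms would be added here
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

if __name__ == "__main__":
    # Unpaired toy batches: the blurry and sharp images need not correspond.
    blurry = torch.rand(2, 3, 64, 64)
    sharp = torch.rand(2, 3, 64, 64)
    print(dual_step(blurry, sharp))
```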
| Translated title of the contribution | Dual learning generative adversarial network for dynamic scene deblurring |
|---|---|
| Original language | Traditional Chinese |
| Pages (from-to) | 1305-1314 |
| Number of pages | 10 |
| Journal | Kongzhi yu Juece/Control and Decision |
| Volume | 39 |
| Issue number | 4 |
| DOI | |
| Publication status | Published - Apr 2024 |
Keywords
- attention-guided
- dual learning
- dynamic scene deblurring
- feature map loss function
- generative adversarial network