Triple-adjacent-frame generative network for blind video motion deblurring

Wenlong Liu, Yuejin Zhao, Ming Liu*, Weichao Yi, Liquan Dong, Mei Hui

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

6 Citations (Scopus)

Abstract

Photos and videos captured by handheld imaging devices are often degraded by unwanted blur caused by hand jitter and the fast movement of objects during the exposure time. Most previous studies have addressed single-image deblurring or video deblurring but neglected a detailed analysis of the spatiotemporal continuity between adjacent frames, which limits the deblurring effect. We propose a novel end-to-end blind video motion deblurring network that takes triple adjacent frames as input to deblur a blurry video frame. In our approach, a bidirectional temporal feature transfer between the triple adjacent frames passes the latent features of the central frame to a group encoder of its neighbors. A hybrid decoder then decodes the group features and estimates a sharper video frame relative to the central frame. Experimental results show that our model outperforms previous state-of-the-art methods in terms of traditional metrics (PSNR and SSIM) and visual quality at an acceptable time cost. The code is available at https://github.com/BITLIULONGEE/Triple-Adjacent-Frame-Generative-Network.
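For readers who want a concrete picture of the pipeline the abstract describes, below is a minimal PyTorch sketch of the triple-adjacent-frame idea. All module names (TripleFrameDeblurNet, conv_block), channel widths, and the simplified fusion-by-concatenation scheme are illustrative assumptions, not the authors' implementation; the reference code is at the GitHub link above.

```python
# A minimal sketch of the triple-adjacent-frame idea from the abstract.
# Module names, channel widths, and the exact fusion scheme are
# illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.ReLU(inplace=True),
    )

class TripleFrameDeblurNet(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        # Encoder for the central (blurry) frame.
        self.center_enc = conv_block(3, ch)
        # Group encoder for each neighbor frame; it also receives the
        # central frame's latent features (the bidirectional temporal
        # transfer is simplified here to concatenating center features
        # with each neighbor before encoding).
        self.group_enc = conv_block(3 + ch, ch)
        # Hybrid decoder fuses the three feature groups and predicts a
        # residual that sharpens the central frame.
        self.decoder = nn.Sequential(
            conv_block(3 * ch, ch),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, prev_f, center_f, next_f):
        c = self.center_enc(center_f)
        p = self.group_enc(torch.cat([prev_f, c], dim=1))
        n = self.group_enc(torch.cat([next_f, c], dim=1))
        fused = torch.cat([p, c, n], dim=1)
        # Residual connection: the network estimates the sharp frame
        # relative to the blurry central frame.
        return center_f + self.decoder(fused)

# Usage on a dummy triple of adjacent 256x256 RGB frames:
frames = [torch.randn(1, 3, 256, 256) for _ in range(3)]
sharp = TripleFrameDeblurNet()(*frames)
print(sharp.shape)  # torch.Size([1, 3, 256, 256])
```

The residual output reflects the abstract's phrasing that the decoder "estimates a sharper video frame relative to the central frame"; the actual group encoder and hybrid decoder in the paper are more elaborate than this single-scale sketch.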

Original language: English
Pages (from-to): 153-165
Number of pages: 13
Journal: Neurocomputing
Volume: 376
DOI
Publication status: Published - 1 Feb 2020
