Abstract
Photos and videos captured by handheld imaging devices often suffer from unwanted blur caused by hand jitter and fast object motion during the exposure time. Most previous studies address single-image and video deblurring but neglect a detailed analysis of the spatiotemporal continuity between adjacent frames, which limits deblurring performance. We propose a novel end-to-end blind video motion deblurring network that takes triple adjacent frames as input to deblur a blurry video frame. In our approach, a bidirectional temporal feature transfer between the triple adjacent frames passes the latent features of the central frame on to a group encoder of its neighbors. A hybrid decoder then decodes the group features and estimates a sharp version of the central frame. Experimental results show that our model outperforms previous state-of-the-art methods in terms of traditional metrics (PSNR and SSIM) and visual quality, at an acceptable time cost. The code is available at https://github.com/BITLIULONGEE/Triple-Adjacent-Frame-Generative-Network.
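To make the pipeline described in the abstract concrete, below is a minimal PyTorch-style sketch of a triple-adjacent-frame forward pass: a shared encoder per frame, the central frame's latent features transferred to both neighbors' encodings, and a decoder over the fused group features. All module names, layer sizes, and the 1x1-convolution fusion are illustrative assumptions, not the authors' architecture; the actual implementation is in the linked repository.

```python
# Hypothetical sketch of a triple-adjacent-frame deblurring forward pass.
# Module names and layer sizes are illustrative assumptions, not the
# authors' released code (see the linked GitHub repository for that).
import torch
import torch.nn as nn


class FrameEncoder(nn.Module):
    """Convolutional encoder producing latent features for one frame."""

    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=1, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)


class TripleFrameDeblurNet(nn.Module):
    """Toy network: encode the central frame, pass its latent features to
    the encodings of the previous/next frames (bidirectional transfer),
    then decode the fused group features into a sharp central frame."""

    def __init__(self, channels=32):
        super().__init__()
        self.center_enc = FrameEncoder(channels)
        self.neighbor_enc = FrameEncoder(channels)
        # Fuse a neighbor's features with the central frame's latent code
        # (assumed here to be a 1x1 convolution over the concatenation).
        self.transfer = nn.Conv2d(2 * channels, channels, 1)
        self.decoder = nn.Sequential(
            nn.Conv2d(3 * channels, channels, 3, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(channels, 3, 4, stride=2, padding=1),
        )

    def forward(self, prev, center, nxt):
        z_c = self.center_enc(center)                      # central latent features
        z_p = self.neighbor_enc(prev)                      # backward neighbor
        z_n = self.neighbor_enc(nxt)                       # forward neighbor
        z_p = self.transfer(torch.cat([z_p, z_c], dim=1))  # transfer: center -> prev
        z_n = self.transfer(torch.cat([z_n, z_c], dim=1))  # transfer: center -> next
        group = torch.cat([z_p, z_c, z_n], dim=1)          # group features
        return center + self.decoder(group)                # residual sharp estimate


if __name__ == "__main__":
    net = TripleFrameDeblurNet()
    frames = [torch.randn(1, 3, 64, 64) for _ in range(3)]
    print(net(*frames).shape)  # torch.Size([1, 3, 64, 64])
```

The residual connection at the end reflects a common design choice in deblurring networks (predicting a correction to the blurry central frame rather than the sharp frame from scratch); whether the paper's model does this is not stated in the abstract.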
| Original language | English |
| --- | --- |
| Pages (from-to) | 153-165 |
| Number of pages | 13 |
| Journal | Neurocomputing |
| Volume | 376 |
| DOI | |
| Publication status | Published - 1 Feb 2020 |