Triple-adjacent-frame generative network for blind video motion deblurring

Wenlong Liu, Yuejin Zhao, Ming Liu*, Weichao Yi, Liquan Dong, Mei Hui

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

7 Citations (Scopus)

Abstract

Photos and videos captured by handheld imaging devices often suffer from unwanted blur caused by hand jitter and fast object motion during the exposure time. Most previous studies address single-image or video deblurring but neglect a detailed analysis of the spatiotemporal continuity between adjacent frames, which limits their deblurring performance. We propose a novel end-to-end blind video motion deblurring network that takes triple adjacent frames as input to deblur a blurry video frame. In our approach, a bidirectional temporal feature transfer between the triple adjacent frames passes the latent features of the central frame to a group encoder of its neighbors. A hybrid decoder then decodes the grouped features and estimates a sharper version of the central frame. Experimental results show that our model outperforms previous state-of-the-art methods in terms of traditional metrics (PSNR and SSIM) and visual quality at an acceptable time cost. The code is available at https://github.com/BITLIULONGEE/Triple-Adjacent-Frame-Generative-Network.
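To make the abstract's pipeline concrete, the sketch below shows one plausible way the described data flow could be wired up: the central frame is encoded, its latent features are transferred to the encoders of both neighboring frames, and a decoder fuses the three feature groups into a sharp central frame. This is a minimal illustration, not the authors' implementation (see the linked GitHub repository for that); the module names, shared neighbor encoder, channel widths, and residual output are all assumptions made for brevity.

```python
# Minimal sketch of a triple-adjacent-frame deblurring network.
# NOT the authors' code; all names and design choices here are illustrative.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """3x3 convolution + ReLU, used as the basic unit of this sketch."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class TripleFrameDeblurNet(nn.Module):
    """Deblurs the central frame of a (prev, curr, next) frame triple.

    Latent features of the central frame are passed to the encoders of
    both neighbors (mimicking the bidirectional temporal feature transfer
    described in the abstract), and a decoder fuses the grouped features
    into an estimate of the sharp central frame.
    """

    def __init__(self, base_ch=32):
        super().__init__()
        self.center_enc = conv_block(3, base_ch)
        # Neighbor encoder sees its own frame concatenated with the
        # central features; weight sharing across neighbors is an assumption.
        self.neighbor_enc = conv_block(3 + base_ch, base_ch)
        # Decoder fuses the three feature groups into an RGB correction.
        self.decoder = nn.Sequential(
            conv_block(3 * base_ch, base_ch),
            nn.Conv2d(base_ch, 3, kernel_size=3, padding=1),
        )

    def forward(self, prev, curr, nxt):
        f_c = self.center_enc(curr)
        # Bidirectional transfer: central features flow to both neighbors.
        f_p = self.neighbor_enc(torch.cat([prev, f_c], dim=1))
        f_n = self.neighbor_enc(torch.cat([nxt, f_c], dim=1))
        fused = torch.cat([f_p, f_c, f_n], dim=1)
        # Residual prediction: sharp frame = blurry frame + learned correction.
        return curr + self.decoder(fused)


# Usage: three adjacent frames in NCHW layout, values in [0, 1].
frames = [torch.rand(1, 3, 64, 64) for _ in range(3)]
sharp = TripleFrameDeblurNet()(*frames)
print(sharp.shape)  # torch.Size([1, 3, 64, 64])
```

The residual output (predicting a correction added back to the blurry central frame) is a common choice in restoration networks because the sharp frame is close to the blurry input, but whether this paper uses it should be checked against the repository.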

Original language: English
Pages (from-to): 153-165
Number of pages: 13
Journal: Neurocomputing
Volume: 376
DOIs
Publication status: Published - 1 Feb 2020

Keywords

  • Blind motion deblurring
  • Group encoder
  • Hybrid decoder
  • Temporal feature transfer
  • Triple adjacent frames
