Event-guided Video Clip Generation from Blurry Images

Xin Ding, Tsuyoshi Takatani, Zhongyuan Wang, Ying Fu, Yinqiang Zheng*

*Corresponding author of this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Peer-reviewed

2 Citations (Scopus)

Abstract

Dynamic and active pixel vision sensors (DAVIS) can simultaneously produce streams of asynchronous events captured by the dynamic vision sensor (DVS) and intensity frames from the active pixel sensor (APS). Event sequences offer high temporal resolution and high dynamic range, while intensity images easily suffer from motion blur due to the low frame rate of the APS. In this paper, we present an end-to-end convolutional neural network based method, under local and global event constraints, that restores clear, sharp intensity frames through collaborative learning from a blurry image and its associated event streams. Specifically, we first formulate the relationship between a sharp intensity frame and the corresponding blurry image with its event data as a function to be learned. We then propose a generation module to realize this function, together with a supervision module that constrains the restoration throughout the motion process. We also capture the first realistic dataset with paired blurry frames/events and sharp frames by synchronizing a DAVIS camera and a high-speed camera. Experimental results show that our method reconstructs high-quality sharp video clips and outperforms the state of the art on both simulated and real-world data.
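The abstract does not spell out the learned function, but a minimal sketch of the standard physical relationship between a blurry frame and its event stream is the event-based double integral (EDI) model of Pan et al. (CVPR 2019), which work in this area commonly builds on; the notation below (B, L(t), e(s), c, T) comes from that model and is not necessarily this paper's parameterization.

```latex
% One common physical model linking a blurry frame to the latent sharp
% frames and events: the event-based double integral (EDI) model.
% B: blurry APS frame, L(t): latent sharp frame at time t, L(f): sharp
% frame at reference time f, e(s): signed event stream, c: DVS contrast
% threshold, T: exposure time of the APS frame. (Illustrative sketch,
% not the exact function learned in this paper.)
\begin{align}
  E(t) &= \int_{f}^{t} e(s)\,\mathrm{d}s, \\ % accumulated event polarity
  L(t) &= L(f)\,\exp\!\bigl(c\,E(t)\bigr), \\ % latent frame propagated by events
  B    &= \frac{1}{T}\int_{f-T/2}^{f+T/2} L(t)\,\mathrm{d}t
        = \frac{L(f)}{T}\int_{f-T/2}^{f+T/2}\exp\!\bigl(c\,E(t)\bigr)\,\mathrm{d}t.
\end{align}
```

Inverting this relation for L(f) given B and the events recovers a sharp reference frame, and sweeping the reference time f over the exposure yields a video clip, which is consistent with the abstract's description of learning this blurry-image/event-to-sharp-frame mapping end to end.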

Original language: English
Title of host publication: MM 2022 - Proceedings of the 30th ACM International Conference on Multimedia
Publisher: Association for Computing Machinery, Inc
Pages: 2672-2680
Number of pages: 9
ISBN (electronic): 9781450392037
DOI
Publication status: Published - 10 Oct 2022
Event: 30th ACM International Conference on Multimedia, MM 2022 - Lisboa, Portugal
Duration: 10 Oct 2022 - 14 Oct 2022

Publication series

Name: MM 2022 - Proceedings of the 30th ACM International Conference on Multimedia

Conference

Conference: 30th ACM International Conference on Multimedia, MM 2022
Country/Territory: Portugal
City: Lisboa
Period: 10/10/22 - 14/10/22
