
Low-Light Raw Video Denoising With a High-Quality Realistic Motion Dataset

  • Beijing Institute of Technology

Research output: Contribution to journal › Article › peer-review

Abstract

Recently, supervised deep-learning methods have shown their effectiveness for raw video denoising in low light. However, existing training datasets have specific drawbacks, e.g., inaccurate noise modeling in synthetic datasets, simple hand-crafted or fixed motion, and limited-quality ground truth caused by the beam splitter in real captured datasets. These defects significantly degrade network performance on real low-light video sequences, where noise distributions and motion patterns are extremely complex. In this paper, we collect a low-light raw video denoising dataset with complex motion and high-quality ground truth, overcoming the drawbacks of previous datasets. Specifically, we capture 210 paired videos, each containing short/long exposure pairs of real video frames with dynamic objects and diverse scenes displayed on a high-end monitor. Moreover, while spatial self-similarity has been extensively exploited in image tasks, harnessing this property is even more crucial for video denoising, where temporal redundancy provides additional self-similarity. To effectively exploit the intrinsic temporal-spatial self-similarity of complex motion in real videos, we propose a new Transformer-based network that combines the locality of convolution with the long-range modeling ability of 3D temporal-spatial self-attention. Extensive experiments verify the value of our dataset and the effectiveness of our method on various metrics.
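The paper's network architecture is not detailed on this page; purely as an illustration of the core idea named in the abstract, the following is a minimal NumPy sketch of global 3D temporal-spatial self-attention, where every space-time token of a video tensor attends to all others. All function and variable names here are hypothetical, not taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatiotemporal_self_attention(video, wq, wk, wv):
    """Global 3D self-attention: each (t, h, w) token attends to all others.

    video: array of shape (T, H, W, C); wq/wk/wv: (C, C) projection matrices.
    """
    t, h, w, c = video.shape
    tokens = video.reshape(t * h * w, c)       # flatten space-time into tokens
    q, k, v = tokens @ wq, tokens @ wk, tokens @ wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))  # (THW, THW) attention map
    return (attn @ v).reshape(t, h, w, c)      # back to video layout

rng = np.random.default_rng(0)
video = rng.standard_normal((2, 4, 4, 8))      # (frames, height, width, channels)
wq, wk, wv = (rng.standard_normal((8, 8)) * 0.1 for _ in range(3))
out = spatiotemporal_self_attention(video, wq, wk, wv)
print(out.shape)  # → (2, 4, 4, 8)
```

Because the attention map spans all T·H·W tokens, this naive form scales quadratically with the space-time volume; practical video Transformers restrict or factorize the attention window, which is where the convolutional locality mentioned in the abstract complements it.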

Original language: English
Pages (from-to): 8119-8131
Number of pages: 13
Journal: IEEE Transactions on Multimedia
Volume: 25
DOI
Publication status: Published - 2023

