Combining optical flow and Swin Transformer for Space-Time video super-resolution

Xin Wang, Hua Wang, Mingli Zhang, Fan Zhang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Space–time video super-resolution is a task that aims to interpolate low-frame-rate, low-resolution videos into high-frame-rate, high-resolution ones. While existing Transformer-based methods have achieved results comparable to convolutional neural network-based methods, the computational cost of the Transformer limits its performance under constrained computational resources. Moreover, the Swin Transformer may fail to fully exploit the spatio-temporal information of video frames due to the limitation of its window size, impeding its effectiveness in handling large motions. To address these limitations, we propose an end-to-end space–time video super-resolution architecture based on optical flow alignment and the Swin Transformer. The alignment module is introduced to extract spatio-temporal information from adjacent frames without significantly increasing the computational burden. Additionally, we design a residual convolution layer to enhance the translational invariance of the features extracted by the Swin Transformer and to introduce additional nonlinear transformations. Experimental results demonstrate that our proposed method achieves superior performance on various benchmark datasets compared to state-of-the-art methods. In terms of Peak Signal-to-Noise Ratio, our method outperforms the state-of-the-art methods by at least 0.15 dB on the Vimeo-Medium dataset.
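To make the two components named in the abstract concrete, the following is a minimal PyTorch-style sketch (not the authors' released code): it shows how an adjacent frame's features can be warped with optical flow before the Swin Transformer stage, and what a residual convolution layer after that stage might look like. The names flow_warp and ResidualConvLayer, the tensor shapes, and all hyper-parameters are hypothetical illustrations based only on the abstract.

import torch
import torch.nn as nn
import torch.nn.functional as F


def flow_warp(feat, flow):
    """Warp features (N, C, H, W) towards the reference frame using
    optical flow (N, 2, H, W) given in pixels as (dx, dy)."""
    _, _, h, w = feat.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=feat.device, dtype=feat.dtype),
        torch.arange(w, device=feat.device, dtype=feat.dtype),
        indexing="ij",
    )
    # Displace the base grid by the flow and normalise to [-1, 1],
    # as required by grid_sample.
    gx = 2.0 * (xs.unsqueeze(0) + flow[:, 0]) / max(w - 1, 1) - 1.0
    gy = 2.0 * (ys.unsqueeze(0) + flow[:, 1]) / max(h - 1, 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)          # (N, H, W, 2)
    return F.grid_sample(feat, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)


class ResidualConvLayer(nn.Module):
    """Residual 3x3 convolutions placed after the transformer stage to add
    translational invariance and extra nonlinearity (hypothetical design)."""

    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)


if __name__ == "__main__":
    feat_ref = torch.randn(1, 64, 32, 32)   # reference-frame features
    feat_adj = torch.randn(1, 64, 32, 32)   # adjacent-frame features
    flow = torch.randn(1, 2, 32, 32)        # estimated optical flow (placeholder)
    aligned = flow_warp(feat_adj, flow)     # align adjacent features to the reference
    fused = nn.Conv2d(128, 64, 1)(torch.cat([feat_ref, aligned], dim=1))
    out = ResidualConvLayer(64)(fused)      # Swin Transformer blocks would sit in between
    print(out.shape)                        # torch.Size([1, 64, 32, 32])

In the actual architecture the fused, aligned features would pass through Swin Transformer blocks before the residual convolution layer; those blocks are omitted here to keep the sketch short.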

Original language: English
Article number: 109227
Journal: Engineering Applications of Artificial Intelligence
Volume: 137
DOI
Publication status: Published - Nov 2024
Externally published: Yes
