Title
Flow-Guided Sparse Transformer for Video Deblurring
Abstract
Exploiting similar and sharper scene patches in spatio-temporal neighborhoods is critical for video deblurring. However, CNN-based methods show limitations in capturing long-range dependencies and modeling non-local self-similarity. In this paper, we propose a novel framework, Flow-Guided Sparse Transformer (FGST), for video deblurring. In FGST, we customize a self-attention module, Flow-Guided Sparse Window-based Multi-head Self-Attention (FGSW-MSA). For each $query$ element on the blurry reference frame, FGSW-MSA enjoys the guidance of the estimated optical flow to globally sample spatially sparse yet highly related $key$ elements corresponding to the same scene patch in neighboring frames. In addition, we present a Recurrent Embedding (RE) mechanism to transfer information from past frames and strengthen long-range temporal dependencies. Comprehensive experiments demonstrate that the proposed FGST outperforms state-of-the-art (SOTA) methods on both the DVD and GOPRO datasets and yields visually pleasing results in real-world video deblurring. The code is available at https://github.com/linjing7/VR-Baseline
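To make the FGSW-MSA idea concrete, below is a minimal, single-head PyTorch sketch of flow-guided sparse window attention. This is an illustration under stated assumptions, not the authors' implementation: the function names (flow_warp, fgsw_attention), the window radius parameter, the use of unprojected features as queries/keys/values, and the single-head formulation are all simplifications; the paper's module is multi-head and sits inside a full encoder-decoder with a flow estimator.

```python
# Hypothetical sketch of flow-guided sparse window attention.
# Assumptions: optical flows are precomputed; features serve directly as
# queries/keys/values (no learned projections); single attention head.
import torch
import torch.nn.functional as F


def flow_warp(feat, flow):
    """Backward-warp `feat` (B, C, H, W) by `flow` (B, 2, H, W) via grid_sample."""
    b, _, h, w = feat.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=feat.device, dtype=feat.dtype),
        torch.arange(w, device=feat.device, dtype=feat.dtype),
        indexing="ij",
    )
    grid_x = xs.unsqueeze(0) + flow[:, 0]  # flow-displaced x coordinates
    grid_y = ys.unsqueeze(0) + flow[:, 1]  # flow-displaced y coordinates
    # Normalize coordinates to [-1, 1] as required by grid_sample.
    grid = torch.stack(
        (2.0 * grid_x / max(w - 1, 1) - 1.0, 2.0 * grid_y / max(h - 1, 1) - 1.0),
        dim=-1,
    )
    return F.grid_sample(feat, grid, align_corners=True, padding_mode="border")


def fgsw_attention(q_feat, kv_feats, flows, radius=1):
    """
    q_feat:   (B, C, H, W) features of the blurry reference frame.
    kv_feats: list of (B, C, H, W) features of neighboring frames.
    flows:    list of (B, 2, H, W) flows from the reference to each neighbor.
    radius:   radius of the sparse sampling window around each flow-displaced
              location; each query sees (2*radius+1)**2 keys per neighbor.
    """
    b, c, h, w = q_feat.shape
    q = q_feat.flatten(2).transpose(1, 2)  # (B, H*W, C) query elements
    keys = []
    for feat, flow in zip(kv_feats, flows):
        # Align the neighbor to the reference, then sample a small window of
        # offsets so each query attends to sparse, flow-related key elements.
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                offset = flow.new_tensor([dx, dy]).view(1, 2, 1, 1)
                warped = flow_warp(feat, flow + offset)  # (B, C, H, W)
                keys.append(warped.flatten(2).transpose(1, 2))
    k = torch.stack(keys, dim=2)                      # (B, H*W, N, C)
    attn = torch.einsum("bqc,bqnc->bqn", q, k) / c ** 0.5
    attn = attn.softmax(dim=-1)
    out = torch.einsum("bqn,bqnc->bqc", attn, k)      # keys reused as values
    return out.transpose(1, 2).reshape(b, c, h, w)


if __name__ == "__main__":
    # Toy usage with random features and flows (shapes only, no real frames).
    q = torch.randn(1, 16, 32, 32)
    neighbors = [torch.randn(1, 16, 32, 32) for _ in range(2)]
    flows = [torch.randn(1, 2, 32, 32) for _ in range(2)]
    print(fgsw_attention(q, neighbors, flows, radius=1).shape)  # (1, 16, 32, 32)
```

The sparsity is visible in the cost: each query attends to only T x (2*radius+1)^2 flow-aligned keys rather than all T x H x W positions per frame, which is what makes globally sampling related patches across frames tractable.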
Year
2022
Venue
International Conference on Machine Learning
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors (10)
Name            Order  Citations  PageRank
Jing Lin        1      0          1.69
Yuanhao Cai     2      4          3.43
Xiaowan Hu      3      4          3.09
Wang H          4      71         29.35
Youliang Yan    5      0          1.35
Xueyi Zou       6      6          2.49
Henghui Ding    7      36         10.78
Zhang Yulun     8      206        22.15
Radu Timofte    9      1880       118.45
Luc Van Gool    10     27566      1819.51