Title
Learning an Occlusion-Aware Network for Video Deblurring
Abstract
Video deblurring is a challenging task because blur is caused by camera shake, object motion, and other factors. The success of state-of-the-art methods stems mainly from exploiting the temporal information of neighboring frames through alignment. When occlusion occurs within a sequence, these approaches become less effective because the alignment is inaccurate. In this paper, we propose an effective occlusion-aware network that handles occlusion for video deblurring. The proposed module first generates a coarse pixel-wise alignment filter to exploit temporal information and then learns an adaptive affine transformation to handle the occluded areas. In addition, a self-attention mechanism is developed to better model the occluded pixels. To further improve performance, we develop a multi-scale strategy and train the network in an end-to-end manner. Both quantitative and qualitative experimental results show that the proposed method performs favorably against state-of-the-art methods on benchmark datasets. The code and trained models are available at: https://github.com/XQLuck/code.git
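As background for the pixel-wise alignment filter described above, here is a minimal sketch of spatially variant filtering, in which every output pixel is produced by its own kernel rather than one shared convolution kernel. The function name, data layout, and replicate padding are illustrative assumptions, not the paper's implementation; in the paper's setting the per-pixel kernels would be predicted by a network to align a neighboring frame.

```python
# Hypothetical sketch of a pixel-wise (spatially variant) filter.
# Unlike ordinary convolution, each output pixel uses its OWN k x k kernel.
# All names and shapes here are illustrative, not the paper's code.

def pixelwise_filter(image, kernels, k=3):
    """Apply a per-pixel k x k kernel to a 2-D image (nested lists of floats).

    image:   H x W grid of intensities
    kernels: H x W grid; each entry is a k x k weight kernel for that pixel
             (in the paper's setting, predicted by the alignment-filter module)
    """
    h, w = len(image), len(image[0])
    r = k // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy = min(max(y + dy, 0), h - 1)  # replicate-pad borders
                    xx = min(max(x + dx, 0), w - 1)
                    acc += kernels[y][x][dy + r][dx + r] * image[yy][xx]
            out[y][x] = acc
    return out
```

With an identity kernel (weight 1 at the center) at every pixel, the output equals the input; a kernel whose weight sits off-center instead shifts content, which is how such filters can express per-pixel alignment.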
Year
2022
DOI
10.1109/TCSVT.2021.3132102
Venue
IEEE Transactions on Circuits and Systems for Video Technology
Keywords
Video deblurring, convolutional neural network, spatial variant filter, occlusion
DocType
Journal
Volume
32
Issue
7
ISSN
1051-8215
Citations
0
PageRank
0.34
References
23
Authors
3
Name | Order | Citations | PageRank
Q. Xu | 1 | 33 | 8.42
Jin-shan Pan | 2 | 567 | 30.84
Yuntao Qian | 3 | 597 | 54.17