Title
Video Deblurring via Motion Compensation and Adaptive Information Fusion
Abstract
Non-uniform motion blur caused by camera shake or object motion is a common artifact in videos captured by hand-held devices. Recent advances in video deblurring have shown that convolutional neural networks (CNNs) are able to aggregate information from multiple unaligned consecutive frames to generate sharper images. However, without explicit image alignment, most existing CNN-based methods often introduce temporal artifacts, especially when the input frames are severely blurred. To address this issue, we propose a novel video deblurring method that handles spatially varying blur in dynamic scenes. In particular, we introduce a motion estimation and motion compensation module that estimates the optical flow from the blurry images and then warps the previously deblurred frame to help restore the current frame. In this way, earlier restoration results benefit the restoration of subsequent frames. This recurrent scheme exploits contextual information efficiently and improves the temporal coherence of the results. Furthermore, to suppress the negative effect of alignment errors, we propose an adaptive information fusion module that filters the temporal information. Experimental results confirm that the proposed method is both effective and efficient.
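The pipeline the abstract describes (estimate optical flow from the blurry frames, warp the previously deblurred frame to the current frame, adaptively fuse the warped result with the current blurry frame, then restore) can be illustrated with a minimal recurrent step. The sketch below is not the authors' implementation; the module names, layer sizes, and the simple sigmoid-gated fusion are assumptions chosen only to make the data flow concrete.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def warp(img, flow):
    """Backward-warp `img` (B, C, H, W) with a dense optical-flow field (B, 2, H, W)."""
    b, _, h, w = img.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(img.device)   # (2, H, W)
    coords = grid.unsqueeze(0) + flow                             # (B, 2, H, W)
    # Normalize to [-1, 1] as required by grid_sample.
    cx = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    cy = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    grid_norm = torch.stack((cx, cy), dim=-1)                     # (B, H, W, 2)
    return F.grid_sample(img, grid_norm, align_corners=True)

class RecurrentDeblurStep(nn.Module):
    """One recurrent step: motion estimation/compensation, adaptive fusion, restoration.
    All sub-networks here are hypothetical stand-ins for the paper's modules."""
    def __init__(self, feat=32):
        super().__init__()
        self.flow_net = nn.Sequential(      # blurry frame pair -> optical flow
            nn.Conv2d(6, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, 2, 3, padding=1))
        self.fusion_net = nn.Sequential(    # adaptive fusion mask in [0, 1]
            nn.Conv2d(6, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, 1, 3, padding=1), nn.Sigmoid())
        self.deblur_net = nn.Sequential(    # fused input -> restored frame
            nn.Conv2d(6, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, 3, 3, padding=1))

    def forward(self, blurry_prev, blurry_cur, sharp_prev):
        flow = self.flow_net(torch.cat((blurry_cur, blurry_prev), dim=1))
        warped_prev = warp(sharp_prev, flow)                      # motion compensation
        mask = self.fusion_net(torch.cat((blurry_cur, warped_prev), dim=1))
        fused = mask * warped_prev + (1.0 - mask) * blurry_cur    # down-weight misaligned pixels
        return self.deblur_net(torch.cat((fused, blurry_cur), dim=1))
```

At inference, such a step would be applied frame by frame, feeding each restored output back in as the previous deblurred frame, which is what makes the scheme recurrent and lets earlier results benefit later frames.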
Year
2019
DOI
10.1016/j.neucom.2019.03.009
Venue
Neurocomputing
Keywords
Video deblurring, Motion blur, Optical flow, Motion compensation
Field
Shake, Pattern recognition, Deblurring, Convolutional neural network, Motion compensation, Motion blur, Coherence (physics), Artificial intelligence, Motion estimation, Optical flow, Mathematics
DocType
Journal
Volume
341
ISSN
0925-2312
Citations
1
PageRank
0.37
References
27
Authors
4
Name            Order   Citations   PageRank
Zongqian Zhan   1       2           0.73
Xue Yang        2       15          10.21
Yihui Li        3       1           0.37
Chao Pang       4       1           0.37