Abstract |
---|
Recovering a sharp video sequence from a motion-blurred image is highly ill-posed due to the significant loss of motion information in the blurring process. For event-based cameras, however, fast motion can be captured as events at a high temporal rate, opening new opportunities to explore effective solutions. In this paper, we start from a sequential formulation of event-based motion deblurring, then show how its optimization can be unfolded with a novel end-to-end deep architecture. The proposed architecture is a convolutional recurrent neural network that integrates visual and temporal knowledge at both global and local scales in a principled manner. To further improve the reconstruction, we propose a differentiable directional event filtering module that effectively extracts a rich boundary prior from the stream of events. We conduct extensive experiments on the synthetic GoPro dataset and a large, newly introduced dataset captured by a DAVIS240C camera. The proposed approach achieves state-of-the-art reconstruction quality and generalizes better to real-world motion blur. |
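For context on why events make this inverse problem tractable, the classical event-based double integral (EDI) relation (Pan et al., CVPR 2019) — which the paper's learned sequential formulation builds upon, not the paper's own architecture — can be sketched for a single pixel. All values here are synthetic, and the contrast threshold `c` is an assumed constant:

```python
import numpy as np

# Sketch of the event-based double integral (EDI) relation: a blurred pixel
# is the temporal average of latent intensities, and events record quantized
# log-intensity changes, so a latent frame can be recovered in closed form.
# This illustrates the physical model only, not the paper's learned network.

c = 0.2                            # event contrast threshold (assumption)
T = 64                             # discrete timesteps within the exposure

# Synthetic latent log-intensity of one pixel brightening during exposure.
logL = np.linspace(0.0, 1.0, T)
L = np.exp(logL)

# Blurred observation: temporal average of the latent intensities.
B = L.mean()

# Events: cumulative count of threshold crossings relative to frame 0.
E = np.round((logL - logL[0]) / c)

# EDI model: L(t) = L0 * exp(c * E(t))  =>  B = L0 * mean(exp(c * E)),
# so the first latent frame follows directly from B and the events.
L0_hat = B / np.exp(c * E).mean()

print(abs(L0_hat - L[0]))          # small error, due only to quantization
```

The residual error comes from quantizing log-intensity changes to multiples of `c`; the paper's deep architecture replaces this fixed closed form with a learned, unfolded optimization that also exploits spatial context.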
Year | DOI | Venue
---|---|---
2020 | 10.1109/CVPR42600.2020.00338 | 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)

DocType | ISSN | Citations
---|---|---
Conference | 1063-6919 | 0

PageRank | References | Authors
---|---|---
0.34 | 22 | 6
Name | Order | Citations | PageRank |
---|---|---|---
Zhe Jiang | 1 | 26 | 9.94 |
Yu Zhang | 2 | 1 | 1.03 |
Dongqing Zou | 3 | 2 | 2.05 |
Jimmy S. J. Ren | 4 | 324 | 23.85 |
Jian Cheng Lv | 5 | 337 | 54.52 |
Yebin Liu | 6 | 688 | 49.05 |