Title
An Internal Learning Approach To Video Inpainting
Abstract
We propose a novel video inpainting algorithm that simultaneously hallucinates missing appearance and motion (optical flow) information, building upon the recent 'Deep Image Prior' (DIP) that exploits convolutional network architectures to enforce plausible texture in static images. In extending DIP to video we make two important contributions. First, we show that coherent video inpainting is possible without a priori training. We take a generative approach to inpainting based on internal (within-video) learning without reliance upon an external corpus of visual data to train a one-size-fits-all model for the large space of general videos. Second, we show that such a framework can jointly generate both appearance and flow, whilst exploiting these complementary modalities to ensure mutual consistency. We show that leveraging appearance statistics specific to each video achieves visually plausible results whilst handling the challenging problem of long-term consistency.
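The method described in the abstract amounts to a DIP-style optimization run on the input video itself: a convolutional generator is overfit to the known (unmasked) pixels, jointly predicting RGB appearance and optical flow, with a consistency term tying the two modalities together. The snippet below is a minimal illustrative sketch under those assumptions only; the architecture, loss weights, and flow-consistency formulation are placeholders and not the paper's exact configuration.

```python
# Minimal sketch of internal (per-video) learning for inpainting, in the spirit of
# Deep Image Prior extended to video. Architecture and losses are illustrative
# assumptions, not the paper's exact setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointGenerator(nn.Module):
    """Tiny conv net mapping a fixed noise map to RGB appearance (3 ch)
    and forward optical flow (2 ch) for each frame."""
    def __init__(self, noise_ch=32, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(noise_ch, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 5, 3, padding=1),  # 3 appearance + 2 flow channels
        )

    def forward(self, z):
        out = self.net(z)
        return torch.sigmoid(out[:, :3]), out[:, 3:]  # appearance in [0,1], flow unbounded

def warp(img, flow):
    """Backward-warp img by flow (in pixels) using bilinear grid sampling."""
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h, dtype=torch.float32),
                            torch.arange(w, dtype=torch.float32), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).to(img.device)       # (h, w, 2), pixel coords
    coords = base.unsqueeze(0) + flow.permute(0, 2, 3, 1)     # displace by predicted flow
    gx = 2.0 * coords[..., 0] / (w - 1) - 1.0                 # normalise to [-1, 1]
    gy = 2.0 * coords[..., 1] / (h - 1) - 1.0
    return F.grid_sample(img, torch.stack((gx, gy), dim=-1), align_corners=True)

# Toy inputs: T frames with a binary mask (1 = known pixel, 0 = hole).
T, H, W = 4, 64, 64
frames = torch.rand(T, 3, H, W)
masks = torch.ones(T, 1, H, W)
masks[:, :, 20:40, 20:40] = 0
noise = torch.randn(T, 32, H, W)  # one fixed noise code per frame

gen = JointGenerator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)

for step in range(200):  # overfit to this single video: internal learning, no external corpus
    app, flow = gen(noise)
    # Reconstruct only the known pixels; holes are filled by the network prior.
    loss_app = F.l1_loss(app * masks, frames * masks)
    # Consistency: frame t+1 warped back by the flow at t should match frame t.
    loss_cons = F.l1_loss(warp(app[1:], flow[:-1]), app[:-1])
    loss = loss_app + 0.1 * loss_cons
    opt.zero_grad()
    loss.backward()
    opt.step()
```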
Year: 2019
DOI: 10.1109/ICCV.2019.00281
Venue: 2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019)
DocType: Conference
Volume: 2019
Issue: 1
ISSN: 1550-5499
Citations: 0
PageRank: 0.34
References: 6
Authors: 6
Name                Order  Citations  PageRank
Hao-Tian Zhang      1      23         2.43
Long Mai            2      220        14.63
Hailin Jin          3      1937       108.60
Zhaowen Wang        4      1063       40.64
Ning Xu             5      88         10.99
John P. Collomosse  6      734        50.47