Abstract
This paper presents the first end-to-end network for exemplar-based video colorization. The main challenge is to achieve temporal consistency while remaining faithful to the reference style. To address this issue, we introduce a recurrent framework that unifies the semantic correspondence and color propagation steps. Both steps allow a provided reference image to guide the colorization of every frame, thus reducing accumulated propagation errors. Video frames are colorized in sequence based on the colorization history, and their coherency is further enforced by the temporal consistency loss. All of these components, learned end-to-end, help produce realistic videos with good temporal stability. Experiments show our results are superior to state-of-the-art methods both quantitatively and qualitatively.
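
The sketch below illustrates the frame-by-frame recurrence the abstract describes: each frame is guided by the reference exemplar and by the previously colorized frame, so reference alignment and propagation happen at every step. It is a minimal illustration, not the paper's released implementation; `CorrespondenceNet`, `ColorizationNet`, the toy convolutions, and the loss signature are assumptions introduced here for clarity.

```python
# Hypothetical sketch of a recurrent exemplar-based colorization loop.
# The module internals are toy stand-ins, not the paper's architecture.
import torch
import torch.nn as nn


class CorrespondenceNet(nn.Module):
    """Aligns the reference colors to the current grayscale frame (placeholder)."""
    def __init__(self):
        super().__init__()
        # Toy stand-in: 1 luminance + 3 reference RGB channels -> 2 chrominance channels.
        self.conv = nn.Conv2d(1 + 3, 2, kernel_size=3, padding=1)

    def forward(self, gray_frame, reference_rgb):
        return self.conv(torch.cat([gray_frame, reference_rgb], dim=1))


class ColorizationNet(nn.Module):
    """Predicts chrominance for the current frame, conditioned on the aligned
    reference colors and the previously colorized frame (placeholder)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1 + 2 + 2, 2, kernel_size=3, padding=1)

    def forward(self, gray_frame, warped_ref_ab, prev_ab):
        return self.conv(torch.cat([gray_frame, warped_ref_ab, prev_ab], dim=1))


def colorize_sequence(frames_gray, reference_rgb, corr_net, color_net):
    """Colorize frames in order; every step sees the reference and the history.

    frames_gray: (B, T, 1, H, W) luminance, reference_rgb: (B, 3, H, W).
    """
    outputs = []
    prev_ab = torch.zeros(frames_gray.size(0), 2, *frames_gray.shape[-2:])
    for t in range(frames_gray.size(1)):
        gray = frames_gray[:, t]                       # current grayscale frame
        warped_ab = corr_net(gray, reference_rgb)      # reference guidance per frame
        prev_ab = color_net(gray, warped_ab, prev_ab)  # recurrent color propagation
        outputs.append(prev_ab)
    return torch.stack(outputs, dim=1)                 # (B, T, 2, H, W)


def temporal_consistency_loss(pred_ab, warped_prev_ab, valid_mask):
    """Penalize color changes between consecutive frames where the (assumed)
    optical-flow warp of the previous prediction is valid."""
    return (valid_mask * (pred_ab - warped_prev_ab).abs()).mean()
```

In this reading, the recurrence (feeding `prev_ab` back into the next step) carries the colorization history, while the per-frame reference guidance keeps propagation errors from accumulating; the temporal term would be added to the image-level losses during training.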
Year | DOI | Venue |
---|---|---|
2019 | 10.1109/CVPR.2019.00824 | 2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019) |

DocType | Volume | ISSN
---|---|---|
Conference | abs/1906.09909 | 1063-6919

Citations | PageRank | References
---|---|---|
4 | 0.39 | 0

Authors (7)
Name | Order | Citations | PageRank |
---|---|---|---|
Bo Zhang | 1 | 22 | 5.68 |
Mingming He | 2 | 34 | 2.91 |
Jing Liao | 3 | 182 | 25.81 |
Pedro V. Sander | 4 | 1111 | 63.92 |
Lu Yuan | 5 | 801 | 48.29 |
Amine Bermak | 6 | 493 | 90.25 |
Dong Chen | 7 | 681 | 32.51 |