Abstract
---
Time-lapse videos usually contain visually appealing content but are often difficult and costly to create. In this paper, we present an end-to-end solution to synthesize a time-lapse video from a single outdoor image using deep neural networks. Our key idea is to train a conditional generative adversarial network based on existing datasets of time-lapse videos and image sequences. We propose a multi-frame joint conditional generation framework to effectively learn the correlation between the illumination change of an outdoor scene and the time of day. We further present a multi-domain training scheme for robust training of our generative models from two datasets with different distributions and missing timestamp labels. Compared to alternative time-lapse video synthesis algorithms, our method uses the timestamp as the control variable and does not require a reference video to guide the synthesis of the final output. We conduct ablation studies to validate our algorithm and compare it with state-of-the-art techniques both qualitatively and quantitatively.
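The abstract describes a generator conditioned on the time of day, with the timestamp acting as the control variable. As a rough illustration of that idea only, the sketch below shows a generator that takes an outdoor image and a normalized timestamp and produces a re-illuminated frame; the class name, layer sizes, and the sin/cos time encoding are all assumptions for illustration, not the authors' actual architecture.

```python
import torch
import torch.nn as nn

class TimeConditionedGenerator(nn.Module):
    """Illustrative encoder-decoder generator conditioned on time of day.

    Hypothetical sketch of the abstract's idea (image + timestamp -> frame);
    the structure and layer sizes are assumptions, not the paper's model.
    """

    def __init__(self, channels=64):
        super().__init__()
        # Encode the input RGB image into a feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels * 2, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Embed the scalar timestamp (time of day normalized to [0, 1)),
        # encoded as sin/cos so that 23:59 and 00:00 map close together.
        self.time_embed = nn.Linear(2, channels * 2)
        # Decode the conditioned features back to an RGB image.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(channels * 2, channels, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(channels, 3, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, image, t):
        # t: (batch,) tensor of times in [0, 1); map onto the unit circle.
        angle = 2 * torch.pi * t
        t_feat = self.time_embed(torch.stack([angle.sin(), angle.cos()], dim=1))
        feat = self.encoder(image)
        # Broadcast the time embedding over spatial dimensions and add it.
        feat = feat + t_feat[:, :, None, None]
        return self.decoder(feat)

# Usage: synthesize frames for several times of day from a single image.
g = TimeConditionedGenerator()
img = torch.randn(1, 3, 128, 128)        # placeholder outdoor image
times = torch.tensor([0.25, 0.5, 0.75])  # 6am, noon, 6pm
frames = [g(img, t.unsqueeze(0)) for t in times]
```

Sweeping the timestamp input over [0, 1) for a fixed image yields the frame sequence of a synthetic time-lapse, which is how a timestamp control variable removes the need for a reference video.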
Year | DOI | Venue
---|---|---
2019 | 10.1109/CVPR.2019.00150 | 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019)
Field | DocType | Volume
---|---|---
Generative adversarial network, Computer science, End-to-end principle, Control variable, Timestamp, Artificial intelligence, Generative grammar, Machine learning, Deep neural networks | Journal | abs/1904.00680

ISSN | Citations | PageRank
---|---|---
1063-6919 | 5 | 0.41

References | Authors
---|---
0 | 6
Name | Order | Citations | PageRank |
---|---|---|---
Seonghyeon Nam | 1 | 28 | 3.05 |
Chongyang Ma | 2 | 257 | 19.21 |
Menglei Chai | 3 | 191 | 14.24 |
William Brendel | 4 | 396 | 15.12 |
Ning Xu | 5 | 184 | 20.03 |
Seon Joo Kim | 6 | 455 | 31.34 |