Title
Unsupervised video-to-video translation with preservation of frame modification tendency
Abstract
Tremendous advances have been achieved in image translation through the use of generative adversarial networks (GANs). For video-to-video translation, a similar idea has been leveraged by various studies, which may focus on the associations among relevant frames. However, existing GAN-based video-synthesis methods do not fully exploit the spatial-temporal information in videos, especially across consecutive frames. In this paper, we propose an efficient video-translation method that preserves the frame modification trends of the original video's sequential frames and smooths the variations between the generated frames. To constrain the consistency of this tendency between the generated video and the original one, we propose a tendency-invariant loss that encourages further exploitation of spatial-temporal information. Experiments show that our method learns richer information from adjacent frames and generates more desirable videos than the baselines, i.e., Recycle-GAN and CycleGAN.
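The record does not specify the exact form of the tendency-invariant loss. One plausible reading of "preserving the frame modification tendency" is to penalize the L1 distance between the inter-frame differences of the original and generated sequences; a minimal sketch under that assumption (the function name and formulation below are illustrative, not the paper's exact loss) might look like:

```python
import numpy as np

def tendency_invariant_loss(original, generated):
    """L1 distance between the inter-frame differences ("modification
    tendencies") of two frame sequences of shape (T, H, W, C).

    NOTE: this is an assumed reconstruction from the abstract, not the
    paper's published formulation.
    """
    orig_diff = np.diff(original, axis=0)   # frame-to-frame change in source video
    gen_diff = np.diff(generated, axis=0)   # frame-to-frame change in generated video
    return np.abs(orig_diff - gen_diff).mean()

# Toy check: a constant brightness offset leaves frame-to-frame changes
# intact, so the loss is (numerically) zero.
rng = np.random.default_rng(0)
frames = rng.random((4, 8, 8, 3))
shifted = frames + 0.5
print(tendency_invariant_loss(frames, shifted))  # near 0
```

In a GAN training loop this term would be added to the adversarial objective, encouraging the generator to reproduce the temporal dynamics of the source video rather than matching frames independently.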
Year: 2020
DOI: 10.1007/s00371-020-01913-6
Venue: The Visual Computer
Keywords: Video translation, Generative adversarial networks, Unsupervised, Spatial-temporal information
DocType: Journal
Volume: 36
Issue: 10
ISSN: 0178-2789
Citations: 0
PageRank: 0.34
References: 0
Authors: 4
Name        Order  Citations  PageRank
Huajun Liu  1      6          2.79
Chao Li     2      0          0.68
Dian Lei    3      0          1.69
Zhu Qing    4      7          4.87