Title
Dual-MTGAN: Stochastic and Deterministic Motion Transfer for Image-to-Video Synthesis
Abstract
Generating videos with content and motion variations is a challenging task in computer vision. While recent developments in GANs allow video generation from latent representations, it remains difficult to produce videos with particular content or motion patterns of interest. In this paper, we propose Dual Motion Transfer GAN (Dual-MTGAN), which takes image and video data as inputs while learning disentangled content and motion representations. Our Dual-MTGAN is able to perform deterministic motion transfer and stochastic motion generation. Given an input image, the former preserves its content while transferring motion patterns observed from a reference video sequence, whereas the latter directly produces videos with plausible yet diverse motion patterns from the input image alone. The proposed model is trained in an end-to-end manner, without requiring pre-defined motion features such as pose or facial landmarks. Quantitative and qualitative results confirm the effectiveness and robustness of our model in addressing such conditioned image-to-video tasks.
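To make the two modes described in the abstract concrete, below is a minimal PyTorch sketch of one way such a dual-path generator could be wired: a content encoder for the input image, a motion encoder that extracts per-frame motion codes from a reference video (deterministic transfer), a recurrent noise sampler that produces motion codes from scratch (stochastic generation), and a shared decoder that renders frames from content plus motion. All module names, layer sizes, and wiring are illustrative assumptions based only on the abstract, not the authors' implementation.

# A minimal sketch (not the authors' code) of the dual-path idea described in the
# abstract: shared content encoder, reference-video motion encoder, stochastic
# motion sampler, and a decoder that renders frames from content + motion codes.
import torch
import torch.nn as nn


class ContentEncoder(nn.Module):
    def __init__(self, channels=3, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, dim, 4, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim, 4, 2, 1), nn.ReLU(inplace=True),
        )

    def forward(self, image):                      # (B, 3, H, W)
        return self.net(image)                     # (B, dim, H/4, W/4)


class MotionEncoder(nn.Module):
    """Summarizes a reference video into per-frame motion codes."""
    def __init__(self, channels=3, dim=64, motion_dim=32):
        super().__init__()
        self.frame_enc = nn.Sequential(
            nn.Conv2d(channels, dim, 4, 2, 1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(dim, motion_dim),
        )

    def forward(self, video):                      # (B, T, 3, H, W)
        b, t = video.shape[:2]
        codes = self.frame_enc(video.flatten(0, 1))
        return codes.view(b, t, -1)                # (B, T, motion_dim)


class MotionSampler(nn.Module):
    """Rolls noise through a GRU to produce a plausible motion trajectory."""
    def __init__(self, noise_dim=16, motion_dim=32):
        super().__init__()
        self.noise_dim = noise_dim
        self.rnn = nn.GRU(noise_dim, motion_dim, batch_first=True)

    def forward(self, batch, length):
        z = torch.randn(batch, length, self.noise_dim)
        codes, _ = self.rnn(z)
        return codes                               # (B, T, motion_dim)


class Decoder(nn.Module):
    """Renders one frame per motion code, conditioned on the content features."""
    def __init__(self, dim=64, motion_dim=32, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(dim + motion_dim, dim, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(dim, channels, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, content, motion_codes):      # content: (B, C, h, w)
        frames = []
        for t in range(motion_codes.size(1)):
            m = motion_codes[:, t, :, None, None].expand(-1, -1, *content.shape[2:])
            frames.append(self.net(torch.cat([content, m], dim=1)))
        return torch.stack(frames, dim=1)          # (B, T, 3, H, W)


if __name__ == "__main__":
    image = torch.randn(2, 3, 64, 64)
    ref_video = torch.randn(2, 8, 3, 64, 64)
    content = ContentEncoder()(image)
    decoder = Decoder()
    # Deterministic motion transfer: motion codes come from the reference video.
    transferred = decoder(content, MotionEncoder()(ref_video))
    # Stochastic motion generation: motion codes come from sampled noise.
    generated = decoder(content, MotionSampler()(batch=2, length=8))
    print(transferred.shape, generated.shape)      # both (2, 8, 3, 64, 64)

In an actual adversarial training setup the content and motion representations would additionally be constrained by image and video discriminators; the sketch above only illustrates how the same decoder can consume motion codes from either path.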
Year: 2020
DOI: 10.1109/ICPR48806.2021.9412781
Venue: 2020 25th International Conference on Pattern Recognition (ICPR)
DocType: Conference
ISSN: 1051-4651
Citations: 0
PageRank: 0.34
References: 24
Authors: 4
Name                    Order  Citations  PageRank
Fu-En Yang              1      12         2.60
Jing-Cheng Chang        2      0          0.34
Yuan-Hao Lee            3      0          0.68
Yu-Chiang Frank Wang    4      914        61.63