Title
Static2Dynamic: Video Inference From a Deep Glimpse
Abstract
In this article, we address the novel and challenging task of video inference, which aims to infer a video sequence from a given set of non-consecutive video frames. Taking such frames as anchor inputs, our goal is to recover plausible video sequences that are consistent with the observed anchor frames at their associated time steps. With the proposed Stochastic and Recurrent Conditional GAN (SR-cGAN), we preserve visual content across video frames while also handling possible temporal ambiguity. In the experiments, we show that our SR-cGAN not only produces preferable video inference results but can also be applied to the related tasks of video generation, video interpolation, video inpainting, and video prediction.
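The abstract describes conditioning a recurrent generator on observed anchor frames together with a stochastic latent code, so that several plausible in-between sequences can be sampled. The sketch below is only a minimal illustration of that idea, not the authors' SR-cGAN implementation: the frame encoder, the GRU-based recurrence, the layer sizes, and the 64x64 frame resolution are all assumptions made for this example.

```python
# Hypothetical sketch: a recurrent conditional generator that maps two observed
# "anchor" frames plus a noise vector to an in-between video sequence.
# Architecture details are illustrative assumptions, not the paper's model.
import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    """Encode a 3x64x64 frame into a flat feature vector (assumed design)."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, feat_dim),
        )

    def forward(self, x):
        return self.net(x)

class RecurrentGenerator(nn.Module):
    """A GRU cell rolls a latent state over time; each state is decoded to a frame."""
    def __init__(self, feat_dim=128, noise_dim=32, hidden_dim=256):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.encoder = FrameEncoder(feat_dim)
        self.gru = nn.GRUCell(feat_dim + noise_dim, hidden_dim)
        self.decoder = nn.Sequential(
            nn.Linear(hidden_dim, 128 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, first_anchor, last_anchor, noise, seq_len):
        # Condition the recurrence on both anchor frames; the noise vector lets
        # the model express different plausible motions (temporal ambiguity).
        cond = self.encoder(first_anchor) + self.encoder(last_anchor)
        h = torch.zeros(first_anchor.size(0), self.hidden_dim,
                        device=first_anchor.device)
        frames = []
        for _ in range(seq_len):
            h = self.gru(torch.cat([cond, noise], dim=1), h)
            frames.append(self.decoder(h))
        return torch.stack(frames, dim=1)  # (batch, seq_len, 3, 64, 64)

# Usage with dummy tensors in place of real anchor frames.
gen = RecurrentGenerator()
a0 = torch.randn(2, 3, 64, 64)   # first anchor frame
aT = torch.randn(2, 3, 64, 64)   # last anchor frame
z = torch.randn(2, 32)           # stochastic latent; resampling it yields another sequence
video = gen(a0, aT, z, seq_len=8)
print(video.shape)               # torch.Size([2, 8, 3, 64, 64])
```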
Year
2020
DOI
10.1109/TETCI.2020.2968599
Venue
IEEE Transactions on Emerging Topics in Computational Intelligence
Keywords
Task analysis, Video sequences, Gallium nitride, Interpolation, Stochastic processes, Visualization, Generative adversarial networks
DocType
Journal
Volume
4
Issue
4
Citations
0
PageRank
0.34
References
0
Authors
4
Name                    Order  Citations  PageRank
Yu-Ying Yeh             1      15         1.99
Yen-Cheng Liu           2      48         7.12
Wei-Chen Chiu           3      0          0.68
Yu-Chiang Frank Wang    4      914        61.63