Title
Edge Guided Generation Network for Video Prediction
Abstract
Video prediction is a challenging problem due to the highly complex variation of video appearance and motion. Traditional methods that directly predict pixel values often produce blurring and artifacts. Furthermore, cumulative errors can lead to a sharp drop in prediction quality for long-term prediction. To alleviate these problems, we propose a novel edge-guided video prediction network, which first models the dynamics of frame edges and predicts the future frame edges, then generates the future frames under the guidance of the predicted edges. Specifically, our network consists of two modules: a ConvLSTM-based edge prediction module and an edge-guided frame generation module. The whole network is differentiable and can be trained end-to-end without extra supervision. Extensive experiments on the KTH human action dataset and the challenging KITTI autonomous driving dataset demonstrate that our method achieves better results than state-of-the-art methods, especially for long-term video prediction.
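The two-stage pipeline the abstract describes (predict future edge maps, then generate frames conditioned on them) can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the edge extractor is a simple finite-difference operator, `ToyEdgePredictor` is a hypothetical decayed-average stand-in for the ConvLSTM edge prediction module, and `generate_frame` is a hypothetical blend standing in for the learned edge-guided generator.

```python
import numpy as np

def edge_map(frame):
    """Crude edge extraction via finite differences
    (stand-in for the edge maps the network conditions on)."""
    gx = np.zeros_like(frame)
    gy = np.zeros_like(frame)
    gx[:, 1:] = frame[:, 1:] - frame[:, :-1]
    gy[1:, :] = frame[1:, :] - frame[:-1, :]
    return np.abs(gx) + np.abs(gy)

class ToyEdgePredictor:
    """Hypothetical stand-in for the ConvLSTM edge prediction module:
    keeps a running state over observed edge maps and emits it as the
    predicted future edge map."""
    def __init__(self, decay=0.9):
        self.decay = decay
        self.state = None

    def step(self, edges):
        if self.state is None:
            self.state = edges
        else:
            self.state = self.decay * self.state + (1 - self.decay) * edges
        return self.state

def generate_frame(last_frame, predicted_edges, alpha=0.5):
    """Hypothetical edge-guided generator: blends the last observed
    frame with the predicted edge map (the paper uses a learned,
    end-to-end trainable generator instead)."""
    return (1 - alpha) * last_frame + alpha * predicted_edges

# Run the two stages over a short synthetic clip.
rng = np.random.default_rng(0)
clip = rng.random((4, 8, 8))  # 4 frames of an 8x8 "video"
predictor = ToyEdgePredictor()
for frame in clip:
    future_edges = predictor.step(edge_map(frame))
next_frame = generate_frame(clip[-1], future_edges)
print(next_frame.shape)  # (8, 8)
```

The point of the decomposition is that edges are a sparser, more stable structure than raw pixels, so errors accumulate more slowly in edge space during long-term rollout; the generator then only has to fill in appearance around a predicted structure.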
Year
2018
DOI
10.1109/ICME.2018.8486602
Venue
2018 IEEE International Conference on Multimedia and Expo (ICME)
Keywords
Video prediction, deep learning, spatial-temporal network, image generation
Field
Computer vision, Image generation, Pattern recognition, Computer science, Differentiable function, Pixel, Artificial intelligence, Deep learning
DocType
Conference
ISSN
1945-7871
ISBN
978-1-5386-1738-0
Citations
0
PageRank
0.34
References
6
Authors
5
Name, Order, Citations, PageRank
Kai Xu, 1, 56, 20.13
Guorong Li, 2, 196, 19.93
Huijuan Xu, 3, 239, 12.33
Weigang Zhang, 4, 45, 8.53
Qingming Huang, 5, 3919, 267.71