Title
Mixpred: Video Prediction Beyond Optical Flow
Abstract
Video prediction is a meaningful task with a wide range of application scenarios, and it is also a challenging one, since it requires learning an internal representation of a given video that captures both appearance and motion dynamics. Existing methods regard this problem as spatiotemporal sequence forecasting and attempt to solve it in a one-shot fashion, which causes the prediction results to be blurry or inaccurate. A more intuitive approach is to split the problem into two parts: modeling the dynamic pattern of the given video and learning the appearance representation of the given video frames. In this paper, we develop a novel network structure named MixPred based on this idea. We divide the prediction problem into the two parts mentioned above and build two subnets to solve them separately. Instead of fusing the results of the subnets at the final layer, we put forward a parallel interaction style that merges dynamic information and content information throughout the whole network in a more natural way. In addition, we propose three different connection methods to explore the most effective connection structure. We trained the model on UCF-101 and KITTI, and tested it on UCF-101, KITTI, and Caltech. The results demonstrate that our method achieves state-of-the-art performance both quantitatively and qualitatively.
Year: 2019
DOI: 10.1109/ACCESS.2019.2961383
Venue: IEEE ACCESS
Keywords: End-to-end training, parallel interaction, spatial subnet, temporal subnet, video prediction
DocType: Journal
Volume: 7
ISSN: 2169-3536
Citations: 0
PageRank: 0.34
References: 0
Authors: 5
Name          Order  Citations  PageRank
Jie Yan       1      0          0.34
Guihe Qin     2      23         9.00
Rui Zhao      3      2          1.04
Yanhua Liang  4      0          0.34
Qianyi Xu     5      0          0.34