Title
Adversarial Training for Video Disentangled Representation
Abstract
The strong demand for video analytics is largely driven by the widespread deployment of CCTV. Encoding moving objects and scenes of varying size and complexity in an unsupervised manner remains a challenge and seriously affects the quality of video prediction and subsequent analysis. In this paper, we introduce adversarial training to improve DrNet, which disentangles a video into stationary-scene and moving-object representations, while taking tiny objects and complex scenes into account. These representations can be used in subsequent industrial applications such as vehicle density estimation and video retrieval. Our experiments on the LASIESTA database confirm the validity of this method in terms of both reconstruction and prediction performance. In addition, we propose an experiment that zeroes out one of the two codes and reconstructs the images by concatenating the zeroed and intact codes. This experiment evaluates the moving-object and scene coding quality separately, and shows that adversarial training achieves significantly better visual reconstruction quality, despite complex scenes and tiny objects.
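The zero-code experiment described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the encoder and decoder here are hypothetical placeholders (in DrNet they are learned CNNs), and only the code-concatenation and zeroing plumbing reflects the described procedure.

```python
import numpy as np

CODE_DIM = 8  # hypothetical code size; the real model uses learned CNN features

def encode_scene(frame):
    """Placeholder stationary-scene (content) encoder."""
    return frame.reshape(-1)[:CODE_DIM] * 0.1

def encode_object(frame):
    """Placeholder moving-object (pose) encoder."""
    return frame.reshape(-1)[-CODE_DIM:] * 0.1

def decode(scene_code, object_code):
    """Placeholder decoder: reconstruct from the concatenated codes."""
    z = np.concatenate([scene_code, object_code])
    return np.tanh(z)

rng = np.random.default_rng(0)
frame = rng.standard_normal((4, 4))

scene = encode_scene(frame)
obj = encode_object(frame)

full_recon = decode(scene, obj)

# The ablation from the abstract: zero one code, keep the other, decode.
scene_only = decode(scene, np.zeros_like(obj))   # evaluates scene coding alone
object_only = decode(np.zeros_like(scene), obj)  # evaluates object coding alone

print(full_recon.shape, scene_only.shape, object_only.shape)
```

Comparing `scene_only` and `object_only` reconstructions against ground truth is what lets the two representations be scored separately.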
Year
Venue
Field
2019
MMM
Density estimation, Computer vision, Video reconstruction, Coding quality, Video retrieval, Pattern recognition, Computer science, Artificial intelligence, Concatenation, Analytics, Encoding (memory), Adversarial system
DocType
Conference
Citations
0
PageRank
0.34
References
18
Authors
7
Name            Order  Citations  PageRank
Renjie Xie      1      5          4.16
Yuancheng Wang  2      1          1.71
Xie Tian        3      55         6.43
Yuhao Zhang     4      5          3.81
Xu Li           5      297        46.09
Jian Lu         6      0          0.68
Qiao Wang       7      97         21.94