Title
Solving Approximate Wasserstein GANs to Stationarity
Abstract
Generative Adversarial Networks (GANs) are one of the most practical methods for learning data distributions. A popular GAN formulation uses the Wasserstein distance as a metric between probability distributions. Unfortunately, minimizing the Wasserstein distance between the data distribution and the generative model distribution is challenging, as the objective is non-convex, non-smooth, and even hard to compute. In this work, we propose to use a smooth approximation of the Wasserstein GAN objective, and we show that this smooth approximation is close to the original objective. Moreover, gradient information for the approximate formulation is computationally cheap to obtain, so first-order optimization methods can readily be applied. Based on this observation, we propose a class of algorithms with guaranteed theoretical convergence to stationarity. Unlike methods for the original non-smooth objective, our proposed algorithm only requires solving the discriminator problem to approximate optimality. We apply our method to learning Gaussian mixtures on a grid and to learning MNIST digits. Our method allows the use of powerful cost functions based on latent representations of the data, where the latent representation can itself be optimized adversarially.
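The abstract outlines an alternating scheme: smooth the Wasserstein objective, solve the inner discriminator/dual problem only approximately, and take first-order steps on the generator. Below is a minimal PyTorch sketch of that scheme, not the authors' implementation: it assumes entropic (Sinkhorn-style) regularization as the smoothing and a squared-Euclidean ground cost, and the names (`sinkhorn_plan`, `generator_step`, `eps`, `n_inner`) are hypothetical.

```python
# Minimal sketch (not the paper's code) of smoothed-Wasserstein GAN training:
# an approximate inner solve of the regularized dual, then a first-order
# generator step. Assumes entropic regularization with strength `eps`.
import math
import torch

def sinkhorn_plan(C, eps=0.1, n_inner=50):
    """Approximately solve the entropy-regularized OT problem for cost matrix C.

    Log-domain Sinkhorn iterations; an approximate inner solve suffices,
    mirroring the abstract's claim that the discriminator problem only needs
    to be solved to approximate optimality.
    """
    n, m = C.shape
    log_mu = torch.full((n,), -math.log(n))  # uniform weights on samples
    log_nu = torch.full((m,), -math.log(m))
    f, g = torch.zeros(n), torch.zeros(m)
    for _ in range(n_inner):
        # Alternating dual updates (block coordinate ascent on the smooth dual).
        f = -eps * torch.logsumexp((g[None, :] - C) / eps + log_nu[None, :], dim=1)
        g = -eps * torch.logsumexp((f[:, None] - C) / eps + log_mu[:, None], dim=0)
    # Primal transport plan recovered from the approximate dual solution.
    log_pi = (f[:, None] + g[None, :] - C) / eps + log_mu[:, None] + log_nu[None, :]
    return log_pi.exp()

def generator_step(generator, opt, data_batch, z_batch, eps=0.1):
    fake = generator(z_batch)                  # g_theta(z)
    C = torch.cdist(fake, data_batch) ** 2     # squared-Euclidean ground cost
    with torch.no_grad():
        pi = sinkhorn_plan(C, eps=eps)         # inner solve; no gradient needed
    # Envelope-theorem-style step: hold the plan fixed, differentiate <pi, C>
    # with respect to the generator parameters through the cost matrix.
    loss = (pi * C).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

The design point the sketch illustrates is the envelope-theorem step: because the regularized objective is smooth, an approximately optimal plan `pi` can be held fixed while backpropagating the transport cost to the generator, so only cheap gradient information is needed per outer iteration.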
Year
2018
Venue
arXiv: Learning
Field
Convergence (routing), Mathematical optimization, Discriminator, MNIST database, First order, Algorithm, Probability distribution, Gaussian, Grid, Mathematics, Generative model
DocType
Journal
Volume
abs/1802.08249
Citations
1
PageRank
0.35
References
11
Authors
4
Name                Order  Citations  PageRank
Maziar Sanjabi      1      189        13.81
Lei Jimmy Ba        2      88872      96.55
Meisam Razaviyayn   3      913        44.38
Jason D. Lee        4      711        48.29