Title: AdaGAN: Boosting Generative Models
Abstract
Generative Adversarial Networks (GANs) are an effective method for training generative models of complex data such as natural images. However, they are notoriously hard to train and can suffer from missing modes: the model fails to produce examples in certain regions of the space. We propose an iterative procedure, called AdaGAN, which at every step adds a new component to a mixture model by running a GAN algorithm on a re-weighted sample. This is inspired by boosting, where many potentially weak individual predictors are greedily aggregated to form a strong composite predictor. We prove analytically that this incremental procedure converges to the true distribution in a finite number of steps if each step is optimal, and at an exponential rate otherwise. We also show experimentally that the procedure addresses the problem of missing modes.
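The boosting-style loop described in the abstract can be illustrated with a minimal sketch. The names (`adagan_sketch`, `gauss_pdf`) and the setup are illustrative, not the paper's API: a single Gaussian fitted by weighted maximum likelihood stands in for a GAN trained on the re-weighted sample, and the re-weighting heuristic (up-weighting points the current mixture covers poorly) is a simplification of the optimal weights derived in the paper.

```python
import math

def gauss_pdf(x, mean, std):
    """Density of a 1-D Gaussian at x."""
    z = (x - mean) / std
    return math.exp(-0.5 * z * z) / (std * math.sqrt(2.0 * math.pi))

def adagan_sketch(data, steps=3, beta=0.5):
    """Toy AdaGAN-style loop on 1-D data (illustrative names/logic).

    Each 'weak generator' is a single Gaussian fitted by weighted maximum
    likelihood, standing in for a GAN trained on a re-weighted sample.
    Returns the mixture as a list of (mixture_weight, mean, std) tuples.
    """
    weights = [1.0 / len(data)] * len(data)  # start from uniform sample weights
    mixture = []
    for _ in range(steps):
        # "Train" the new component on the current weighted sample.
        total_w = sum(weights)
        mean = sum(w * x for w, x in zip(weights, data)) / total_w
        var = sum(w * (x - mean) ** 2 for w, x in zip(weights, data)) / total_w
        std = max(math.sqrt(var), 1e-6)
        if mixture:
            # Shrink the old mixture by (1 - beta) and give the new component
            # weight beta, so the mixture weights always sum to one.
            mixture = [(mw * (1.0 - beta), m, s) for mw, m, s in mixture]
            mixture.append((beta, mean, std))
        else:
            mixture.append((1.0, mean, std))
        # Re-weight the data: points the current mixture already covers well
        # get less weight, pushing the next component toward missing modes.
        def mix_density(x):
            return sum(mw * gauss_pdf(x, m, s) for mw, m, s in mixture)
        scores = [1.0 / (1e-6 + mix_density(x)) for x in data]
        norm = sum(scores)
        weights = [s / norm for s in scores]
    return mixture
```

Because each new component receives weight `beta` and the existing mixture is rescaled by `1 - beta`, the mixture weights remain a valid probability distribution at every step, mirroring the incremental aggregation in the paper.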
Year: 2017
Venue: Advances in Neural Information Processing Systems 30 (NIPS 2017)
DocType: Conference
Volume: 30
ISSN: 1049-5258
Citations: 23
PageRank: 1.31
References: 6
Authors: 6
Name                       | Order | Citations | PageRank
Ilya O. Tolstikhin         | 1     | 55        | 8.48
Sylvain Gelly              | 2     | 760       | 59.74
Olivier Bousquet           | 3     | 4593      | 359.65
Carl-Johann Simon-Gabriel  | 4     | 37        | 3.61
Bernhard Schölkopf         | 5     | 23120     | 3091.82
SIMON-GABRIEL, Carl-Johann | 6     | 23        | 1.31