Title
Coverage and Quality Driven Training of Generative Image Models.
Abstract
Generative modeling of natural images has been extensively studied in recent years, yielding remarkable progress. Current state-of-the-art methods are based on either maximum likelihood estimation or adversarial training. Both methods have their own drawbacks, which are complementary in nature. The first leads to over-generalization: the maximum likelihood criterion encourages models to cover the full support of the training data by heavily penalizing small masses assigned to training samples. Simplifying assumptions in such models limit their capacity and make them spill mass on unrealistic samples. The second leads to mode-dropping, since adversarial training encourages high-quality samples from the model but only indirectly enforces diversity among them. To overcome these drawbacks we make two contributions. First, we propose a novel extension of the variational autoencoder model that uses deterministic invertible transformation layers to map samples from the decoder to the image space. This induces correlations among the pixels given the latent variables, improving over commonly used factorial decoders. Second, we propose a training approach that leverages both coverage-based and quality-based criteria. Our models obtain likelihood scores competitive with state-of-the-art likelihood-based models, while achieving sample quality typical of adversarially trained networks.
Year
2019
Venue
CoRR
DocType
Journal
Volume
abs/1901.01091
Citations
0
PageRank
0.34
References
0
Authors
5
Name                 Order  Citations  PageRank
Konstantin Shmelkov  1      0          0.68
Thomas Lucas         2      2          1.38
Karteek Alahari      3      922        40.31
Cordelia Schmid      4      28581      1983.22
J. J. Verbeek        5      3944       181.44