Abstract |
---|
Generative adversarial networks (GANs) are a class of deep generative models that aim to learn a target distribution in an unsupervised fashion. While they have been successfully applied to many problems, training a GAN is a notoriously challenging task that requires a significant amount of hyperparameter tuning, neural architecture engineering, and a non-trivial number of "tricks". The success in many practical applications, coupled with the lack of a measure to quantify the failure modes of GANs, has resulted in a plethora of proposed losses, regularization and normalization schemes, and neural architectures. In this work we take a sober view of the current state of GANs from a practical perspective. We discuss and evaluate common pitfalls and reproducibility issues, open-source our code on GitHub, and provide pre-trained models on TensorFlow Hub. |
Year | Venue | Field
---|---|---
2019 | International Conference on Machine Learning | Normalization (statistics), Hyperparameter, Regularization (mathematics), Artificial intelligence, Generative grammar, Mathematics, Machine learning

DocType | Citations | PageRank
---|---|---
Conference | 7 | 0.43

References | Authors
---|---
0 | 5
Name | Order | Citations | PageRank |
---|---|---|---|
Karol Kurach | 1 | 234 | 13.37 |
Mario Lucic | 2 | 231 | 16.10 |
Xiaohua Zhai | 3 | 209 | 13.00 |
Marcin Michalski | 4 | 87 | 2.77 |
Sylvain Gelly | 5 | 760 | 59.74 |