Abstract |
---|
Generative Adversarial Networks (GANs) have shown great results in accurately modeling complex distributions, but their training is known to be difficult due to instabilities caused by a challenging minimax optimization problem. This is especially troublesome given the lack of an evaluation metric that can reliably detect non-convergent behaviors. We leverage the notion of duality gap from game theory in order to propose a novel convergence metric for GANs that has low computational cost. We verify the validity of the proposed metric for various test scenarios commonly used in the literature. |
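The duality gap mentioned in the abstract can be illustrated on a simple case. Below is a minimal sketch, assuming a finite two-player zero-sum matrix game with payoff V(p, q) = pᵀAq; the paper applies the same quantity to the GAN minimax objective, but the toy game and all names here are illustrative assumptions, not the authors' implementation.

```python
# Toy illustration (an assumption, not the paper's code) of the duality gap
# for a zero-sum matrix game V(p, q) = p^T A q.
def duality_gap(A, p, q):
    """DG(p, q) = max_q' V(p, q') - min_p' V(p', q).

    DG is always >= 0 and equals 0 exactly at a Nash equilibrium,
    which is what makes it usable as a convergence metric.
    """
    n_rows, n_cols = len(A), len(A[0])
    # Best response of the maximizing player is a pure strategy (a column).
    max_payoff = max(sum(p[i] * A[i][j] for i in range(n_rows))
                     for j in range(n_cols))
    # Best response of the minimizing player is a pure strategy (a row).
    min_payoff = min(sum(A[i][j] * q[j] for j in range(n_cols))
                     for i in range(n_rows))
    return max_payoff - min_payoff

A = [[0.0, 1.0], [1.0, 0.0]]   # matching-pennies-style payoff matrix
uniform = [0.5, 0.5]           # the equilibrium mixed strategy for this game
print(duality_gap(A, uniform, uniform))     # 0.0 at equilibrium
print(duality_gap(A, [1.0, 0.0], uniform))  # 0.5 away from equilibrium
```

For GANs, the max and min in the gap are approximated by training a fresh discriminator (resp. generator) against the frozen current generator (resp. discriminator), which is why the metric has low computational cost relative to sample-based scores.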
Year | Venue | DocType | Volume | Citations | PageRank | References | Authors
---|---|---|---|---|---|---|---
2018 | arXiv: Learning | Journal | abs/1811.05512 | 1 | 0.35 | 0 | 6
Name | Order | Citations | PageRank
---|---|---|---
Paulina Grnarova | 1 | 15 | 2.08 |
Kfir Y. Levy | 2 | 72 | 8.77 |
Aurelien Lucchi | 3 | 2419 | 89.45 |
Nathanael Perraudin | 4 | 134 | 13.56 |
Thomas Hofmann | 5 | 10064 | 1001.83 |
Andreas Krause | 6 | 5822 | 368.37 |