Abstract |
---|
Class-conditional variants of generative adversarial networks (GANs) have recently achieved great success due to their ability to selectively generate samples for given classes while also improving generation quality. However, their training requires a large set of class-labeled data, which is often expensive and difficult to collect in practice. In this paper, we propose an active sampling method to reduce the labeling cost of effectively training class-conditional GANs. On the one hand, the most useful examples are selected for external human labeling, which jointly reduces the difficulty of model learning and alleviates the lack of adversarial training; on the other hand, fake examples are actively sampled for internal model retraining to enhance the adversarial training between the discriminator and the generator. By incorporating the two strategies into a unified framework, we provide a cost-effective approach to training class-conditional GANs that achieves higher generation quality with fewer training examples. Experiments on multiple datasets, with diverse GAN configurations and various metrics, demonstrate the effectiveness of our approach. |
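The abstract describes two selection strategies: picking the most useful unlabeled examples for human labeling, and picking fake examples for model retraining. A minimal sketch of such a two-branch selection step is shown below; the `disc_confidence` scoring function, the uncertainty criterion, and all names are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def disc_confidence(x):
    # Stand-in for a discriminator's "real" confidence score on a batch;
    # in the paper this would come from the trained class-conditional
    # discriminator (assumption for this sketch).
    return 1.0 / (1.0 + np.exp(-x.sum(axis=1)))

def select_for_labeling(pool, k):
    """Pick the k unlabeled examples the model is least certain about
    (confidence closest to 0.5) as candidates for external human labeling."""
    conf = disc_confidence(pool)
    uncertainty = -np.abs(conf - 0.5)  # larger value = more uncertain
    return np.argsort(uncertainty)[-k:]

def select_fakes_for_retraining(fakes, k):
    """Pick the k generated examples the discriminator finds most realistic,
    to sharpen the adversarial game during internal retraining."""
    conf = disc_confidence(fakes)
    return np.argsort(conf)[-k:]

pool = rng.normal(size=(100, 8))   # unlabeled real examples (toy features)
fakes = rng.normal(size=(100, 8))  # generator outputs (toy features)

label_idx = select_for_labeling(pool, 5)
retrain_idx = select_fakes_for_retraining(fakes, 5)
```

In a full training loop, the first selection would be sent to an annotator and added to the labeled set, while the second would be fed back into discriminator/generator updates; both budgets (`k`) trade labeling cost against generation quality.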
Year | DOI | Keywords |
---|---|---|
2019 | 10.1145/3292500.3330883 | active sampling, class-conditional GANs, generative adversarial networks |
Field | DocType |
---|---|
Discriminator, Computer science, Sampling (statistics), Artificial intelligence, Generative grammar, Retraining, Machine learning, Internal model, Adversarial system, Model learning | Conference |
ISBN | Citations | PageRank |
---|---|---|
978-1-4503-6201-6 | 1 | 0.35 |
References | Authors |
---|---|
0 | 2 |
Name | Order | Citations | PageRank |
---|---|---|---|
Ming-Kun Xie | 1 | 5 | 2.81 |
Sheng-Jun Huang | 2 | 475 | 27.21 |