Title
Regularizing Generative Adversarial Networks under Limited Data
Abstract
Recent years have witnessed the rapid progress of generative adversarial networks (GANs). However, the success of GAN models hinges on a large amount of training data. This work proposes a regularization approach for training robust GAN models on limited data. We theoretically show a connection between the regularized loss and an f-divergence called LeCam-divergence, which we find is more robust under limited training data. Extensive experiments on several benchmark datasets demonstrate that the proposed regularization scheme 1) improves the generalization performance and stabilizes the learning dynamics of GAN models under limited training data, and 2) complements recent data augmentation methods. These properties facilitate training GAN models that achieve state-of-the-art performance when only limited training data from the ImageNet benchmark is available. The source code is available at https://github.com/google/lecam-gan.
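The abstract describes the regularizer only at a high level. As a rough illustration, below is a minimal PyTorch sketch of an EMA-anchored discriminator regularizer in the spirit of the approach summarized above; it is a sketch under assumptions, not the paper's verbatim method, and the names `EMAAnchor`, `lecam_style_reg`, and `lambda_lc` are illustrative rather than taken from the paper or the linked repository.

```python
# Illustrative sketch only: an EMA-anchored discriminator regularizer in the
# spirit of the regularization summarized in the abstract. The names and exact
# functional form are assumptions, not the paper's verbatim implementation.
import torch


class EMAAnchor:
    """Tracks exponential moving averages of discriminator outputs."""

    def __init__(self, decay: float = 0.99):
        self.decay = decay
        self.real = 0.0  # running mean of D(x) on real images
        self.fake = 0.0  # running mean of D(G(z)) on generated images

    def update(self, d_real: torch.Tensor, d_fake: torch.Tensor) -> None:
        self.real = self.decay * self.real + (1.0 - self.decay) * d_real.mean().item()
        self.fake = self.decay * self.fake + (1.0 - self.decay) * d_fake.mean().item()


def lecam_style_reg(d_real: torch.Tensor, d_fake: torch.Tensor, anchor: EMAAnchor) -> torch.Tensor:
    """Penalizes discriminator scores that drift far from the moving-average anchors."""
    return (torch.relu(d_real - anchor.fake) ** 2).mean() + \
           (torch.relu(anchor.real - d_fake) ** 2).mean()


# Hypothetical usage inside a discriminator update step:
#   d_real, d_fake = D(real_images), D(G(z).detach())
#   anchor.update(d_real, d_fake)
#   d_loss = adv_loss(d_real, d_fake) + lambda_lc * lecam_style_reg(d_real, d_fake, anchor)
```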
Year
2021
DOI
10.1109/CVPR46437.2021.00783
Venue
2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2021
DocType
Conference
ISSN
1063-6919
Citations
0
PageRank
0.34
References
6
Authors
5
Name               Order   Citations   PageRank
Hung-Yu Tseng      1       81          6.56
Lu Jiang           2       755         37.16
Ce Liu             3       3347        188.04
Ming-Hsuan Yang    4       15303       620.69
Weilong Yang       5       0           0.68