Title
Continual Learning in Generative Adversarial Nets
Abstract
Developments in deep generative models have allowed for tractable learning of high-dimensional data distributions. While the employed learning procedures typically assume that training data is drawn i.i.d. from the distribution of interest, it may be desirable to model distinct distributions which are observed sequentially, such as when different classes are encountered over time. Although conditional variations of deep generative models permit multiple distributions to be modeled by a single network in a disentangled fashion, they are susceptible to catastrophic forgetting when the distributions are encountered sequentially. In this paper, we adapt recent work in reducing catastrophic forgetting to the task of training generative adversarial networks on a sequence of distinct distributions, enabling continual generative modeling.
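The "recent work in reducing catastrophic forgetting" the abstract refers to is elastic weight consolidation (EWC; Kirkpatrick et al., 2017), applied to the generator's parameters. Below is a minimal PyTorch sketch of that idea, assuming a toy MLP generator, a stand-in discriminator logit head, and an arbitrary penalty weight lam; these are illustrative choices, not the authors' actual architecture or hyperparameters.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Toy MLP generator; a stand-in for the paper's conditional generator."""
    def __init__(self, z_dim=64, out_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

def fisher_diagonal(gen, disc_logit, z_dim=64, n_samples=256):
    """Diagonal Fisher estimate: squared gradients of the generator's
    non-saturating GAN loss, averaged over sampled latents."""
    fisher = {n: torch.zeros_like(p) for n, p in gen.named_parameters()}
    for _ in range(n_samples):
        gen.zero_grad()
        z = torch.randn(1, z_dim)
        loss = -F.logsigmoid(disc_logit(gen(z))).mean()  # -log D(G(z))
        loss.backward()
        for n, p in gen.named_parameters():
            fisher[n] += p.grad.detach() ** 2 / n_samples
    return fisher

def ewc_penalty(gen, old_params, fisher, lam=1e3):
    """(lam / 2) * sum_i F_i * (theta_i - theta*_i)^2."""
    loss = torch.zeros(())
    for n, p in gen.named_parameters():
        loss = loss + (fisher[n] * (p - old_params[n]) ** 2).sum()
    return 0.5 * lam * loss

# After training on distribution A: snapshot the weights and Fisher terms.
gen = Generator()
disc_logit = nn.Linear(784, 1)  # stand-in for the discriminator's logit head
fisher = fisher_diagonal(gen, disc_logit)
old_params = {n: p.detach().clone() for n, p in gen.named_parameters()}

# While training on distribution B, add the penalty to the generator loss,
# anchoring the parameters that mattered for generating A's data.
g_loss = ewc_penalty(gen, old_params, fisher)  # + ordinary GAN generator loss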
Year
2017
Venue
arXiv: Learning
Field
Training set, Forgetting, Computer science, Generative systems, Generative modeling, Artificial intelligence, Generative grammar, Machine learning, Adversarial system
DocType
Journal
Volume
abs/1705.08395
Citations
6
PageRank
0.46
References
5
Authors
4
Name           Order  Citations  PageRank
Ari Seff       1      5082       4.31
Alex Beatson   2      8          4.56
Daniel Suo     3      6          0.46
Han Liu        4      4344       2.70