Title
InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets.
Abstract
This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. We derive a lower bound of the mutual information objective that can be optimized efficiently. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, pose from lighting of 3D rendered images, and background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hair styles, presence/absence of eyeglasses, and emotions on the CelebA face dataset. Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing supervised methods. For an up-to-date version of this paper, please see https://arxiv.org/abs/1606.03657.
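The abstract's key technical idea is the variational lower bound on mutual information, L_I(G, Q) = E[log Q(c|x)] + H(c) ≤ I(c; x), which InfoGAN optimizes instead of the intractable I(c; x). Below is a minimal NumPy sketch of estimating that bound by Monte Carlo for a categorical code; the 1-D "generator" and the fixed Gaussian-softmax posterior Q are toy stand-ins (assumptions for illustration), not the paper's neural networks.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 3                 # categorical latent code c in {0, 1, 2}, uniform prior
H_c = np.log(K)       # prior entropy H(c) = log 3

def generate(c, n):
    """Toy stand-in for the generator G(z, c): code c plus Gaussian noise z."""
    return c + 0.1 * rng.standard_normal(n)

def log_q(x):
    """Toy variational posterior Q(c|x): softmax over squared distances
    from x to the cluster centers 0..K-1 (stand-in for the Q network)."""
    logits = -((x[:, None] - np.arange(K)[None, :]) ** 2) / (2 * 0.1 ** 2)
    return logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

# Monte Carlo estimate of the variational lower bound
#   L_I(G, Q) = E_{c ~ P(c), x ~ G(z, c)}[log Q(c|x)] + H(c)  <=  I(c; x).
n = 10_000
c = rng.integers(0, K, n)
x = generate(c, n)
L_I = log_q(x)[np.arange(n), c].mean() + H_c
print(L_I)   # close to H(c) = log 3 ~ 1.0986, never above it
```

Because the clusters are well separated, Q recovers c almost perfectly and the bound is nearly tight at H(c); since log Q(c|x) ≤ 0, the estimate can never exceed H(c), matching the inequality L_I ≤ I(c; x) ≤ H(c).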
Year
2016
Venue
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 29 (NIPS 2016)
DocType
Conference
Volume
29
ISSN
1049-5258
Citations
321
PageRank
8.31
References
17
Authors
6
Name             Order  Citations  PageRank
Xi Chen          1      1649       54.94
Yan Duan         2      775        27.97
Rein Houthooft   3      600        21.07
John Schulman    4      1806       66.95
Ilya Sutskever   5      25814      1120.24
Pieter Abbeel    6      6363       376.48