Title
An Information-Theoretic Analysis of Deep Latent-Variable Models.
Abstract
We present an information-theoretic framework for understanding trade-offs in unsupervised learning of deep latent-variable models using variational inference. This framework emphasizes the need to consider latent-variable models along two dimensions: the ability to reconstruct inputs (distortion) and the communication cost (rate). We derive the optimal frontier of generative models in the two-dimensional rate-distortion plane, and show how the standard evidence lower bound objective is insufficient to select between points along this frontier. However, by performing targeted optimization to learn generative models with different rates, we are able to learn many models that achieve similar generative performance but make vastly different trade-offs in terms of their usage of the latent variable. Through experiments on MNIST and Omniglot with a variety of architectures, we show how our framework sheds light on many recently proposed extensions to the variational autoencoder family.
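The abstract's rate/distortion terminology maps onto the two terms of the standard evidence lower bound: the rate is the KL divergence KL(q(z|x) || p(z)) and the distortion is the expected negative reconstruction log-likelihood. Below is a minimal NumPy sketch of that decomposition, assuming a diagonal-Gaussian encoder and a Bernoulli decoder; the function names (rate_gaussian, distortion_bernoulli, beta_elbo_terms), the beta-weighted objective D + beta*R used here as one simple way to target different rates, and the toy inputs are illustrative assumptions, not code from the paper.

```python
# Hedged sketch (not the paper's code): split the ELBO into a "rate" term
# R = KL(q(z|x) || p(z)) and a "distortion" term D = -E_q[log p(x|z)].
# Optimizing D + beta * R for different beta values is one way to target
# generative models at different points on the rate-distortion plane.
import numpy as np

def rate_gaussian(mu, logvar):
    """Rate: KL( N(mu, diag(exp(logvar))) || N(0, I) ) in nats per example."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

def distortion_bernoulli(x, logits):
    """Distortion: negative Bernoulli log-likelihood -log p(x|z) in nats per example."""
    # softplus(logits) - x * logits == -[x*log(sigmoid) + (1-x)*log(1-sigmoid)]
    return np.sum(np.logaddexp(0.0, logits) - x * logits, axis=-1)

def beta_elbo_terms(x, mu, logvar, logits, beta=1.0):
    """Return (rate, distortion, objective) with objective = D + beta * R."""
    R = rate_gaussian(mu, logvar)
    D = distortion_bernoulli(x, logits)
    return R, D, D + beta * R

# Toy usage with hypothetical encoder/decoder outputs for a batch of 2 inputs.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=(2, 784)).astype(float)    # binarized "images"
mu, logvar = rng.normal(size=(2, 16)), rng.normal(size=(2, 16))
logits = rng.normal(size=(2, 784))                      # stand-in decoder logits for p(x|z)
R, D, obj = beta_elbo_terms(x, mu, logvar, logits, beta=0.5)
print(R.shape, D.shape, obj.shape)  # (2,) (2,) (2,)
```

Sweeping beta from small to large values trades distortion for rate, which is how one could train the family of models with different latent-variable usage that the abstract describes.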
Year
2017
Venue
arXiv: Learning
Field
Information theory, Autoencoder, MNIST database, Computer science, Upper and lower bounds, Inference, Latent variable, Unsupervised learning, Artificial intelligence, Generative grammar, Machine learning
DocType
Journal
Volume
abs/1711.00464
Citations
1
PageRank
0.40
References
13
Authors
6
Name                Order  Citations  PageRank
Alexander A. Alemi  1      70         9.92
Ben Poole           2      554        52.06
Ian Fischer         3      422        26.82
Joshua V. Dillon    4      50         3.85
Rif Saurous         5      148        10.49
Michael Kuperberg   6      7589       529.66