Title: Learnable Explicit Density for Continuous Latent Space and Variational Inference
Abstract: In this paper, we study two aspects of the variational autoencoder (VAE): the prior distribution over the latent variables and its corresponding posterior. First, we decompose the learning of VAEs into layerwise density estimation and argue that a flexible prior benefits both sample generation and inference. Second, we analyze the family of inverse autoregressive flows (inverse AF) and show that, with further improvement, inverse AF can serve as a universal approximator of any complicated posterior. Our analysis yields a unified approach to parameterizing a VAE, without restricting ourselves to factorial Gaussians in the real-valued latent space.
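As a rough illustration of the inverse autoregressive transformation the abstract refers to, the sketch below implements one affine inverse-AF step in numpy. The parameterization (strictly masked linear maps `W_mu`, `W_ls` producing the shift and log-scale) is our own minimal assumption for illustration, not the paper's architecture; it only shows why the autoregressive structure makes the Jacobian triangular and its log-determinant cheap.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4  # latent dimensionality (illustrative)

# Strictly lower-triangular masks: mu_i and sigma_i depend only on z_{<i},
# so the Jacobian dz/dz0 is lower triangular.
mask = np.tril(np.ones((D, D)), k=-1)
W_mu = rng.normal(size=(D, D)) * mask         # hypothetical weights for mu(z0)
W_ls = rng.normal(size=(D, D)) * mask * 0.1   # hypothetical weights for log sigma(z0)

def inverse_af_step(z0):
    """One affine inverse-autoregressive step: z = mu(z0) + sigma(z0) * z0."""
    mu = W_mu @ z0
    log_sigma = W_ls @ z0
    z = mu + np.exp(log_sigma) * z0
    # log|det dz/dz0| = sum_i log sigma_i(z0), since the diagonal of the
    # triangular Jacobian is exactly sigma(z0).
    log_det = log_sigma.sum()
    return z, log_det

z0 = rng.normal(size=D)
z, log_det = inverse_af_step(z0)
```

Sampling with such a flow is a single forward pass, while evaluating the density of an arbitrary point requires inverting the autoregression dimension by dimension, which is the usual trade-off motivating inverse AF as a posterior rather than a prior.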
Year: 2017
Venue: arXiv: Learning
Field: Density estimation, Inverse, Autoregressive model, Mathematical optimization, Autoencoder, Inference, Factorial, Latent variable, Artificial intelligence, Prior probability, Machine learning, Mathematics
DocType:
Volume: abs/1710.02248
Citations: 0
Journal:
PageRank: 0.34
References: 8
Authors: 7
Name | Order | Citations | PageRank
Chin-Wei Huang | 1 | 8 | 5.18
Ahmed Touati | 2 | 4 | 4.10
Laurent Dinh | 3 | 570 | 27.53
Michal Drozdzal | 4 | 17 | 2.79
Mohammad Havaei | 5 | 28 | 3.47
Laurent Charlin | 6 | 637 | 29.86
Aaron C. Courville | 7 | 6671 | 348.46