Title
Learning Hierarchical Priors in VAEs
Abstract
We propose to learn a hierarchical prior in the context of variational autoencoders to avoid the over-regularisation resulting from a standard normal prior distribution. To incentivise an informative latent representation of the data, we formulate the learning problem as a constrained optimisation problem by extending the Taming VAEs framework to two-level hierarchical models. We introduce a graph-based interpolation method, which shows that the topology of the learned latent representation corresponds to the topology of the data manifold, and present several examples where desired properties of the latent representation, such as smoothness and simple explanatory factors, are learned by the prior.
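The constrained-optimisation formulation the abstract refers to builds on the Taming VAEs framework (Rezende & Viola, 2018), where the ELBO is replaced by a Lagrangian: minimise the KL term subject to a reconstruction-error constraint, with the multiplier adapted during training. The sketch below illustrates one such multiplier update; the function name, hyperparameters, and the exponentiated update rule are illustrative assumptions, not the paper's exact algorithm.

```python
import math

def lagrange_multiplier_update(lagrange_lambda, constraint_ma, constraint_value,
                               tolerance, alpha=0.99, step=1e-2):
    """One GECO-style multiplier update (sketch, after Taming VAEs).

    Objective: minimise KL(q||p) subject to E[reconstruction error] <= tolerance,
    via the Lagrangian L = KL + lambda * (error - tolerance).
    The multiplier grows while the constraint is violated and decays once the
    reconstruction error falls below the tolerance.
    """
    # Moving average smooths the stochastic per-batch constraint estimate.
    constraint_ma = alpha * constraint_ma + (1.0 - alpha) * constraint_value
    # Multiplicative (exponentiated-gradient-style) update keeps lambda positive.
    lagrange_lambda *= math.exp(step * (constraint_ma - tolerance))
    return lagrange_lambda, constraint_ma
```

Calling this once per training step with the current batch's reconstruction error yields a self-tuning trade-off between reconstruction quality and the KL term, in place of a fixed beta weight.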
Year
2019
Venue
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019)
Field
Graph, Lagrangian, Interpolation, Algorithm, Standard normal table, Artificial intelligence, Smoothness, Prior probability, Machine learning, Mathematics
DocType
Volume
32
ISSN
1049-5258
Journal
Citations
0
PageRank
0.34
References
0
Authors
5
Name                   Order  Citations  PageRank
Alexej Klushyn         1      0          1.69
Nutan Chen             2      26         6.10
Richard Kurle          3      0          2.70
Botond Cseke           4      193        11.55
Patrick van der Smagt  5      188        24.23