Abstract |
---|
We propose to learn a hierarchical prior in the context of variational autoencoders to avoid the over-regularisation resulting from a standard normal prior distribution. To incentivise an informative latent representation of the data, we formulate the learning problem as a constrained optimisation problem by extending the Taming VAEs framework to two-level hierarchical models. We introduce a graph-based interpolation method, which shows that the topology of the learned latent representation corresponds to the topology of the data manifold, and we present several examples where desired properties of the latent representation, such as smoothness and simple explanatory factors, are learned by the prior. |
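The abstract frames training as a constrained optimisation problem in the spirit of the Taming VAEs (GECO-style) framework it extends. Below is a minimal, hedged sketch of what such a Lagrangian update step can look like for a VAE; the function name `geco_step`, the hyperparameters `kappa` and `alpha`, the single-level prior, and the assumption that `model(x)` returns a reconstruction error and a KL term are all illustrative simplifications, not the authors' two-level hierarchical model.

```python
# Sketch of a GECO-style constrained-optimisation step for a VAE:
# minimise KL subject to a reconstruction constraint, with the Lagrange
# multiplier lambda_ adapted from the (smoothed) constraint value.
import torch

def geco_step(model, optimizer, x, lambda_, ema_c, kappa=0.1, alpha=0.99):
    """One step of the Lagrangian  L = KL + lambda_ * (recon_error - kappa)."""
    recon_error, kl = model(x)              # assumed: per-batch reconstruction error and KL
    constraint = recon_error - kappa        # constraint C(x) <= 0 enforces reconstruction quality
    loss = kl + lambda_ * constraint        # Lagrangian of the constrained problem

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    with torch.no_grad():
        # Smooth the constraint with an exponential moving average, then
        # update the multiplier multiplicatively so it stays positive.
        c = constraint.detach()
        ema_c = c if ema_c is None else alpha * ema_c + (1 - alpha) * c
        lambda_ = lambda_ * torch.exp(ema_c)
    return lambda_, ema_c

# Usage (illustrative): lambda_ starts at 1 and is carried across batches.
# lambda_, ema_c = torch.tensor(1.0), None
# for x in data_loader:
#     lambda_, ema_c = geco_step(model, optimizer, x, lambda_, ema_c)
```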
Year | Venue | Field |
---|---|---|
2019 | ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019) | Graph,Lagrangian,Interpolation,Algorithm,Standard normal table,Artificial intelligence,Smoothness,Prior probability,Machine learning,Mathematics |
DocType | Volume | ISSN
---|---|---
Journal | 32 | 1049-5258

Citations | PageRank | References
---|---|---
0 | 0.34 | 0
Authors |
---|
5 |
Name | Order | Citations | PageRank |
---|---|---|---|
Alexej Klushyn | 1 | 0 | 1.69 |
Nutan Chen | 2 | 26 | 6.10 |
Richard Kurle | 3 | 0 | 2.70 |
Botond Cseke | 4 | 193 | 11.55 |
Patrick van der Smagt | 5 | 188 | 24.23 |