Abstract |
---|
We would like to learn latent representations that are low-dimensional and highly interpretable. A model that has these characteristics is the Gaussian Process Latent Variable Model (GP-LVM). The benefits and drawbacks of the GP-LVM are complementary to those of the Variational Autoencoder (VAE): the former provides interpretable low-dimensional latent representations, while the latter is able to handle large amounts of data and can use non-Gaussian likelihoods. Our motivation for this paper is to marry these two approaches and reap the benefits of both. To do so, we introduce a novel approximate inference scheme inspired by both the GP-LVM and the VAE. We show experimentally that the approximation allows the capacity of the generative bottleneck (Z) of the VAE to be arbitrarily large without losing a highly interpretable representation: reconstruction quality is no longer limited by Z, while a low-dimensional latent space remains available both for ancestral sampling and for reasoning about the embedded data. |
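To make the architecture described in the abstract concrete, the following is a minimal illustrative sketch (not the paper's implementation) of ancestral sampling through a low-dimensional latent space X into a large generative bottleneck Z: each coordinate of Z is drawn as a GP function of X, so Z's capacity can grow while the interpretable space stays 2-D. All names, dimensions, and the stand-in decoder are assumptions made for this example.

```python
# Illustrative sketch only: a GP-structured low-dimensional latent space X
# feeding a large generative bottleneck Z, then a stand-in decoder.
# Dimensions and the decoder are assumptions, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between point sets A (n, q) and B (m, q)."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * sq / lengthscale**2)

# Low-dimensional, interpretable latent points X (q = 2) that we can sample
# from and reason about directly.
q, n = 2, 5
X = rng.standard_normal((n, q))

# Large generative bottleneck Z (D >> q): each of the D coordinates of Z is an
# independent GP function of X, so enlarging D does not change the 2-D space.
D = 256
K = rbf_kernel(X, X) + 1e-6 * np.eye(n)   # GP prior covariance over the n points
L = np.linalg.cholesky(K)
Z = L @ rng.standard_normal((n, D))        # n joint samples of the D-dim bottleneck

# Stand-in "decoder" from Z to data space (a fixed random nonlinear map here;
# in a VAE this would be a trained neural network).
W = rng.standard_normal((D, 64)) / np.sqrt(D)
data = np.tanh(Z @ W)

print(X.shape, Z.shape, data.shape)   # (5, 2) (5, 256) (5, 64)
```

Under these assumptions, reconstruction capacity is controlled by D while sampling and interpretation happen entirely in the q-dimensional space, which is the trade-off the abstract highlights.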
Year | Venue | Field
---|---|---
2017 | Neural Information Processing Systems | Autoencoder, Pattern recognition, Approximate inference, Nonparametric inference, Sampling (statistics), Artificial intelligence, Generative grammar, Arbitrarily large, Mathematics, Machine learning, Encoding (memory), Bayes' theorem

DocType | Volume | Citations
---|---|---
Journal | abs/1712.06536 | 1

PageRank | References | Authors
---|---|---
0.35 | 4 | 4
Name | Order | Citations | PageRank |
---|---|---|---|
Erik Bodin | 1 | 3 | 1.75 |
Iman Malik | 2 | 1 | 0.35 |
Carl Henrik Ek | 3 | 327 | 30.76
Neill D. F. Campbell | 4 | 303 | 18.10 |