Abstract
---
The Variational Auto-Encoder (VAE) model is a popular method to learn at once a generative model and embeddings for data living in a high-dimensional space. In the real world, many datasets may be assumed to be hierarchically structured. Traditionally, VAE uses a Euclidean latent space, but tree-like structures cannot be efficiently embedded in such spaces, as opposed to hyperbolic spaces with negative curvature. We therefore endow VAE with a Poincaré ball model of hyperbolic geometry and derive the necessary methods to work with two main Gaussian generalisations on that space. We empirically show better generalisation to unseen data than the Euclidean counterpart, and can qualitatively and quantitatively better recover hierarchical structures.
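The abstract's core idea, replacing the Euclidean latent space with a Poincaré ball, rests on two ingredients: the hyperbolic geodesic distance and an exponential map that pushes tangent-space samples onto the ball (the basis of the "wrapped" Gaussian generalisation). The following is a minimal NumPy sketch of these standard formulas for curvature -1, not the authors' implementation; the function names are illustrative.

```python
import numpy as np

def poincare_distance(x, y):
    """Geodesic distance on the Poincare ball (curvature -1):
    d(x, y) = arccosh(1 + 2 ||x - y||^2 / ((1 - ||x||^2)(1 - ||y||^2)))."""
    sq = np.sum((x - y) ** 2)
    denom = (1.0 - np.sum(x ** 2)) * (1.0 - np.sum(y ** 2))
    return np.arccosh(1.0 + 2.0 * sq / denom)

def exp_map_origin(v):
    """Exponential map at the origin: maps a Euclidean tangent
    vector v into the open unit ball along a geodesic."""
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return v
    return np.tanh(norm) * v / norm

def sample_wrapped_normal_origin(dim, sigma, rng):
    """Sketch of a wrapped-normal sample centred at the origin:
    draw v ~ N(0, sigma^2 I) in the tangent space, then push it
    onto the ball with the exponential map."""
    v = rng.normal(0.0, sigma, size=dim)
    return exp_map_origin(v)
```

For a point y on the ball, the distance from the origin reduces to 2 artanh(||y||), which is a quick sanity check on the general formula.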
Year | Venue | DocType
---|---|---
2019 | arXiv: Machine Learning | Journal

Volume | Citations | PageRank
---|---|---
abs/1901.06033 | 0 | 0.34
References | Authors
---|---
14 | 5
Name | Order | Citations | PageRank |
---|---|---|---|
Emile Mathieu | 1 | 0 | 0.68 |
Charline Le Lan | 2 | 0 | 1.35 |
Chris J. Maddison | 3 | 1791 | 75.44
Ryota Tomioka | 4 | 1367 | 91.68 |
Yee Whye Teh | 5 | 6253 | 539.26 |