Abstract
The manifold hypothesis states that many kinds of high-dimensional data are concentrated near a low-dimensional manifold. If the topology of this data manifold is non-trivial, a continuous encoder network cannot embed it in a one-to-one manner without creating holes of low density in the latent space. This is at odds with the Gaussian prior assumption typically made in Variational Auto-Encoders (VAEs), because the density of a Gaussian concentrates near a blob-like manifold. In this paper we investigate the use of manifold-valued latent variables. Specifically, we focus on the important case of continuously differentiable symmetry groups (Lie groups), such as the group of 3D rotations $\operatorname{SO}(3)$. We show how a VAE with $\operatorname{SO}(3)$-valued latent variables can be constructed, by extending the reparameterization trick to compact connected Lie groups. Our experiments show that choosing manifold-valued latent variables that match the topology of the latent data manifold is crucial to preserve the topological structure and learn a well-behaved latent space.
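The core construction the abstract refers to, extending the reparameterization trick to a Lie group, can be illustrated with a minimal NumPy sketch: draw Gaussian noise in the Lie algebra $\mathfrak{so}(3)$, push it through the exponential map (Rodrigues' formula), and compose with a mean rotation. The function names (`hat`, `exp_so3`, `sample_so3`) and the specific left-multiplication parameterization are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def hat(v):
    """Map a 3-vector to its skew-symmetric matrix in so(3)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def exp_so3(v):
    """Exponential map so(3) -> SO(3) via Rodrigues' formula."""
    theta = np.linalg.norm(v)
    if theta < 1e-8:
        return np.eye(3) + hat(v)  # first-order expansion near the identity
    K = hat(v / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def sample_so3(R_mu, sigma, rng):
    """Reparameterized sample on SO(3): draw eps ~ N(0, I_3) in the Lie
    algebra, scale it, map it to the group with exp, and left-multiply by
    the mean rotation. Given eps, the output is a deterministic,
    differentiable function of (R_mu, sigma), which is exactly what the
    reparameterization trick requires."""
    eps = rng.standard_normal(3)
    return R_mu @ exp_so3(sigma * eps)

# Usage: sample a rotation near the identity with a small noise scale.
rng = np.random.default_rng(0)
R = sample_so3(np.eye(3), 0.1, rng)
print(np.allclose(R @ R.T, np.eye(3)))  # True: the sample stays on SO(3)
```

Because the noise enters only through the smooth maps `exp_so3` and matrix multiplication, gradients with respect to the mean rotation and scale can flow through the sample, mirroring how the standard Gaussian reparameterization works in Euclidean latent spaces.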
Year | Venue | Field
---|---|---
2018 | arXiv: Machine Learning | Lie group, Mathematical optimization, Symmetry group, Pure mathematics, Latent variable, Gaussian, Odds, Mathematics, Manifold, Homeomorphism, Encoding (memory)

DocType | Volume | Citations
---|---|---
Journal | abs/1807.04689 | 2

PageRank | References | Authors
---|---|---
0.37 | 4 | 7
Name | Order | Citations | PageRank |
---|---|---|---|
Luca Falorsi | 1 | 2 | 0.71 |
Pim de Haan | 2 | 2 | 2.74 |
Tim R. Davidson | 3 | 2 | 0.37 |
Nicola De Cao | 4 | 23 | 4.08 |
Maurice Weiler | 5 | 5 | 2.88 |
Patrick Forré | 6 | 4 | 3.47 |
Taco Cohen | 7 | 228 | 17.82 |