Title
Explorations in Homeomorphic Variational Auto-Encoding.
Abstract
The manifold hypothesis states that many kinds of high-dimensional data are concentrated near a low-dimensional manifold. If the topology of this data manifold is non-trivial, a continuous encoder network cannot embed it in a one-to-one manner without creating holes of low density in the latent space. This is at odds with the Gaussian prior assumption typically made in Variational Auto-Encoders (VAEs), because the density of a Gaussian concentrates near a blob-like manifold. In this paper we investigate the use of manifold-valued latent variables. Specifically, we focus on the important case of continuously differentiable symmetry groups (Lie groups), such as the group of 3D rotations $\operatorname{SO}(3)$. We show how a VAE with $\operatorname{SO}(3)$-valued latent variables can be constructed by extending the reparameterization trick to compact connected Lie groups. Our experiments show that choosing manifold-valued latent variables that match the topology of the data manifold is crucial to preserve the topological structure and learn a well-behaved latent space.
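To make the abstract's central construction concrete, the following is a minimal sketch of how a reparameterization trick can be extended to $\operatorname{SO}(3)$: noise is sampled in the Lie algebra $\mathfrak{so}(3)$ and pushed onto the group with the exponential map (Rodrigues' formula), then composed with a predicted mean rotation. This is not the authors' implementation; the names `hat`, `so3_expmap`, `reparameterize_so3`, and the parameter `sigma` are illustrative assumptions.

```python
# Minimal sketch of SO(3) reparameterization: z = R_mu @ exp(hat(eps)),
# with eps ~ N(0, sigma^2 I) drawn in the Lie algebra so(3).
import numpy as np

def hat(v):
    """Map a vector in R^3 to the corresponding skew-symmetric matrix in so(3)."""
    x, y, z = v
    return np.array([[0.0, -z,   y],
                     [  z, 0.0, -x],
                     [ -y,   x, 0.0]])

def so3_expmap(v):
    """Exponential map so(3) -> SO(3) via Rodrigues' formula."""
    theta = np.linalg.norm(v)
    if theta < 1e-8:
        return np.eye(3) + hat(v)  # first-order approximation near the identity
    K = hat(v / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def reparameterize_so3(R_mu, sigma, rng):
    """Sample a rotation around R_mu: noise enters only through a fixed Gaussian."""
    eps = sigma * rng.standard_normal(3)  # tangent-space noise at the identity
    return R_mu @ so3_expmap(eps)

rng = np.random.default_rng(0)
R_mu = so3_expmap(np.array([0.1, -0.3, 0.2]))   # stand-in for an encoder's mean rotation
z = reparameterize_so3(R_mu, sigma=0.05, rng=rng)
print(np.allclose(z @ z.T, np.eye(3), atol=1e-6), np.linalg.det(z))  # sample stays on SO(3)
```

Because the noise distribution is fixed and the sample is a smooth function of the encoder output, gradients can flow through `R_mu` when the same construction is written in an autodiff framework.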
Year
2018
Venue
arXiv: Machine Learning
Field
Lie group, Mathematical optimization, Symmetry group, Pure mathematics, Latent variable, Gaussian, Odds, Mathematics, Manifold, Homeomorphism, Encoding (memory)
DocType
Journal
Volume
abs/1807.04689
Citations
2
PageRank
0.37
References
4
Authors
7
Name | Order | Citations | PageRank
Luca Falorsi | 1 | 2 | 0.71
Pim de Haan | 2 | 2 | 2.74
Tim R. Davidson | 3 | 2 | 0.37
Nicola De Cao | 4 | 23 | 4.08
Maurice Weiler | 5 | 5 | 2.88
Patrick Forré | 6 | 4 | 3.47
Taco Cohen | 7 | 228 | 17.82