Abstract
---

We investigate the problem of training generative models on very sparse collections of 3D models. In particular, instead of relying on difficult-to-obtain large sets of 3D models, we demonstrate that geometrically motivated energy functions can effectively augment and boost a sparse collection of example (training) models. Technically, we analyze the Hessian of the as-rigid-as-possible (ARAP) energy to adaptively sample from and project onto the underlying (local) shape space, and use the augmented dataset to train a variational autoencoder (VAE). We iterate this process, of building latent spaces of the VAE and augmenting the associated dataset, to progressively reveal a richer and more expressive generative space for creating geometrically and semantically valid samples. We evaluate our method against a set of strong baselines, provide ablation studies, and demonstrate an application to establishing shape correspondences. GLASS produces multiple interesting and meaningful shape variations even when starting from as few as 3–10 training shapes. Our code is available at https://sanjeevmk.github.io/glass_webpage/.
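The iterative scheme the abstract describes, alternately augmenting a sparse shape set with low-energy deformations and retraining a generator on the enlarged set, could be sketched roughly as below. Everything here is an illustrative assumption, not the paper's method: the toy edge-length "ARAP-like" energy stands in for the real ARAP energy and its Hessian analysis, rejection sampling stands in for the paper's adaptive sampling/projection, and the VAE retraining step is elided entirely.

```python
import numpy as np

def arap_like_energy(base, deformed):
    # Toy surrogate for the ARAP energy: squared change in the lengths of
    # "edges" between consecutive points. The real method uses the ARAP
    # energy (and its Hessian) on a mesh; this is only for illustration.
    e0 = np.linalg.norm(np.diff(base, axis=0), axis=1)
    e1 = np.linalg.norm(np.diff(deformed, axis=0), axis=1)
    return float(np.sum((e1 - e0) ** 2))

def augment(shapes, n_samples, sigma, energy_cap, rng):
    # Keep random perturbations whose deformation energy stays under a cap,
    # mimicking sampling within a low-energy neighborhood of each example.
    out = list(shapes)
    for s in shapes:
        for _ in range(n_samples):
            cand = s + rng.normal(0.0, sigma, size=s.shape)
            if arap_like_energy(s, cand) < energy_cap:
                out.append(cand)
    return out

rng = np.random.default_rng(0)
shapes = [rng.normal(size=(10, 3)) for _ in range(3)]  # 3 toy "shapes"
for _ in range(2):
    # Outer loop: augment, then (in the paper) retrain the VAE on the
    # enlarged set; the training step is omitted in this sketch.
    shapes = augment(shapes, n_samples=5, sigma=0.01, energy_cap=0.05, rng=rng)
print(len(shapes))
```

With a small `sigma`, most candidates fall under the energy cap, so the dataset grows multiplicatively across rounds, which is the augmentation effect the abstract relies on when starting from only 3–10 shapes.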
Year | DOI | Venue
---|---|---
2022 | 10.1109/CVPR52688.2022.01800 | IEEE Conference on Computer Vision and Pattern Recognition

Keywords | DocType | Volume
---|---|---
Vision + graphics, Deep learning architectures and techniques | Conference | 2022

Issue | Citations | PageRank
---|---|---
1 | 0 | 0.34

References | Authors
---|---
0 | 6
Name | Order | Citations | PageRank |
---|---|---|---|
Sanjeev Muralikrishnan | 1 | 8 | 1.43 |
Siddhartha Chaudhuri | 2 | 665 | 29.31 |
Noam Aigerman | 3 | 215 | 12.60 |
Vladimir G. Kim | 4 | 961 | 41.44 |
Matthew Fisher | 5 | 757 | 36.98 |
Niloy J. Mitra | 6 | 3813 | 176.15 |