Abstract
---
In this paper, we investigate the degree to which the encoding of a $\beta$-VAE captures label information across multiple architectures on Binary Static MNIST and Omniglot. We demonstrate that, even though it is trained in a completely unsupervised manner, a $\beta$-VAE can retain a large amount of label information, even when asked to learn a highly compressed representation.
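The compression pressure the abstract refers to comes from the $\beta$-VAE objective, which scales the KL term of the standard ELBO: $\mathcal{L} = \mathbb{E}_{q(z|x)}[\log p(x|z)] - \beta \, D_{\mathrm{KL}}(q(z|x) \,\|\, p(z))$, with $\beta > 1$ encouraging a more compressed latent code. As a minimal illustration (not the authors' implementation), a PyTorch sketch of this loss for binary data such as Binary Static MNIST might look as follows; the function name and signature are hypothetical.

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon_logits, mu, logvar, beta=4.0):
    """Hypothetical beta-VAE loss: reconstruction + beta * KL(q(z|x) || p(z)).

    Assumes binary inputs (e.g. Binary Static MNIST) and a diagonal
    Gaussian encoder q(z|x) parameterized by `mu` and `logvar`.
    """
    # Bernoulli reconstruction term (negative log-likelihood of x under the decoder).
    recon = F.binary_cross_entropy_with_logits(
        x_recon_logits, x, reduction="sum"
    )
    # Closed-form KL divergence between N(mu, sigma^2) and the standard normal prior.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # beta > 1 pushes the encoder toward a more compressed representation.
    return recon + beta * kl
```

Sweeping `beta` upward trades reconstruction fidelity for compression, which is the regime in which the paper probes how much label information the encoding retains.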
Year | Venue | DocType
---|---|---
2018 | arXiv: Learning | Journal

Volume | Citations | PageRank
---|---|---
abs/1812.02682 | 0 | 0.34

References | Authors
---|---
0 | 3
Name | Order | Citations | PageRank |
---|---|---|---
Emily Fertig | 1 | 4 | 2.41 |
Aryan Arbabi | 2 | 1 | 0.70 |
Alexander A. Alemi | 3 | 70 | 9.92 |