Title
Adversarial Symmetric Variational Autoencoder
Abstract
A new form of variational autoencoder (VAE) is developed, in which the joint distribution of data and codes is considered in two (symmetric) forms: (i) from observed data fed through the encoder to yield codes, and (ii) from latent codes drawn from a simple prior and propagated through the decoder to manifest data. Lower bounds are learned for the marginal log-likelihood fits to the observed data and the latent codes. When learning with the variational bound, one seeks to minimize the symmetric Kullback-Leibler divergence between the joint density functions from (i) and (ii), while simultaneously maximizing the two marginal log-likelihoods. To facilitate learning, a new form of adversarial training is developed. An extensive set of experiments demonstrates state-of-the-art data reconstruction and generation on several image benchmark datasets.
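The core quantity in the abstract is the symmetric Kullback-Leibler divergence between the encoder-side joint, q(x)q(z|x), and the decoder-side joint, p(z)p(x|z). The following is a minimal illustrative sketch, not the paper's implementation: in the paper the log density ratio is estimated adversarially with a discriminator, whereas here two stand-in 1-D Gaussians (hypothetical, with known densities) are used so that a Monte Carlo estimate of the symmetric KL can be checked against its closed form.

```python
import numpy as np

def gaussian_logpdf(x, mu, sigma):
    """Log density of N(mu, sigma^2) at x."""
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

def symmetric_kl_closed_form(mu_q, sig_q, mu_p, sig_p):
    """KL(q||p) + KL(p||q) for two univariate Gaussians, in closed form."""
    def kl(mu1, s1, mu2, s2):
        return np.log(s2 / s1) + (s1**2 + (mu1 - mu2) ** 2) / (2 * s2**2) - 0.5
    return kl(mu_q, sig_q, mu_p, sig_p) + kl(mu_p, sig_p, mu_q, sig_q)

def symmetric_kl_monte_carlo(mu_q, sig_q, mu_p, sig_p, n=200_000, seed=0):
    """Monte Carlo estimate: E_q[log q - log p] + E_p[log p - log q].

    In AS-VAE the two expectations are taken over the encoder-side and
    decoder-side joints, and the log ratios are supplied by an adversarially
    trained discriminator; here the ratios are exact because the stand-in
    densities are known.
    """
    rng = np.random.default_rng(seed)
    xq = rng.normal(mu_q, sig_q, n)  # samples standing in for the encoder joint
    xp = rng.normal(mu_p, sig_p, n)  # samples standing in for the decoder joint
    log_ratio_q = gaussian_logpdf(xq, mu_q, sig_q) - gaussian_logpdf(xq, mu_p, sig_p)
    log_ratio_p = gaussian_logpdf(xp, mu_p, sig_p) - gaussian_logpdf(xp, mu_q, sig_q)
    return log_ratio_q.mean() + log_ratio_p.mean()

analytic = symmetric_kl_closed_form(0.0, 1.0, 0.5, 1.2)
estimate = symmetric_kl_monte_carlo(0.0, 1.0, 0.5, 1.2)
print(f"closed form: {analytic:.4f}, Monte Carlo: {estimate:.4f}")
```

Note the symmetry: unlike the one-sided KL term in a standard VAE bound, this objective also penalizes regions where the decoder-side joint places mass that the encoder-side joint does not.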
Year
2017
Venue
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 30 (NIPS 2017)
DocType
Journal
Volume
30
ISSN
1049-5258
Citations
12
PageRank
0.54
References
28
Authors
7
Name            Order   Citations   PageRank
Yunchen Pu      1       88          8.55
Weiyao Wang     2       30          2.52
Ricardo Henao   3       2862        3.85
Liqun Chen      4       28          4.77
Zhe Gan         5       3193        2.58
Chunyuan Li     6       4673        3.86
Lawrence Carin  7       1371        1.38