Title
Compound Variational Auto-Encoder
Abstract
Amortized variational inference (AVI) enables deep generative models to be trained efficiently on large datasets. The quality of the approximate inference depends on several factors, such as whether the recognition network can produce proper variational parameters for each datapoint and how well the variational distribution matches the true posterior. This paper focuses on the inference sub-optimality of variational auto-encoders (VAE); the goal is to reduce the error introduced by amortizing the variational parameters over the entire training set instead of optimizing them for each training example individually, a discrepancy known as the amortization gap. The paper extends Bayesian inference in the VAE from the latent level to both the latent and weight levels by adopting a Bayesian neural network (BNN) in the encoder, so that each datapoint obtains its own distribution for better modeling. The hybrid design of the proposed compound VAE is empirically demonstrated to mitigate the amortization gap.
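The abstract does not detail the architecture, so the following is only a minimal PyTorch sketch of the stated idea: stochastic inference at both the weight level (a Bayesian encoder head) and the latent level. The names BayesianLinear and CompoundVAE, the Bayes-by-backprop-style weight sampling, and all layer sizes are assumptions for illustration, not the authors' exact method.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesianLinear(nn.Module):
    # Linear layer with a factorized Gaussian over its weights (Bayes-by-backprop
    # style). Hypothetical stand-in: the abstract does not specify the BNN form.
    def __init__(self, in_features, out_features):
        super().__init__()
        self.w_mu = nn.Parameter(torch.empty(out_features, in_features))
        self.w_logvar = nn.Parameter(torch.full((out_features, in_features), -6.0))
        self.bias = nn.Parameter(torch.zeros(out_features))
        nn.init.xavier_uniform_(self.w_mu)

    def forward(self, x):
        # Reparameterized weight sample: a fresh draw on every forward pass,
        # so each datapoint is encoded with its own sampled weights.
        w = self.w_mu + torch.exp(0.5 * self.w_logvar) * torch.randn_like(self.w_mu)
        return F.linear(x, w, self.bias)

class CompoundVAE(nn.Module):
    # VAE whose encoder head is Bayesian, i.e. stochastic at both the weight
    # level (encoder) and the latent level (z), as the abstract describes.
    def __init__(self, x_dim=784, h_dim=256, z_dim=32):
        super().__init__()
        self.enc_hidden = nn.Linear(x_dim, h_dim)    # deterministic body
        self.enc_mu = BayesianLinear(h_dim, z_dim)   # weight-level inference
        self.enc_logvar = BayesianLinear(h_dim, z_dim)
        self.dec = nn.Sequential(
            nn.Linear(z_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = torch.relu(self.enc_hidden(x))
        mu, logvar = self.enc_mu(h), self.enc_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # latent sample
        return self.dec(z), mu, logvar

# Smoke test: encode/decode a random batch.
# x = torch.rand(16, 784)
# recon, mu, logvar = CompoundVAE()(x)

Training such a model would presumably maximize the standard ELBO (reconstruction term plus latent KL) with an additional KL penalty on the weight posterior; the per-pass weight samples are what give each datapoint an effectively individualized encoder.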
Year
2019
DOI
10.1109/ICASSP.2019.8683633
Venue
2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Keywords
VAE, Variational Inference, Bayesian
DocType
Conference
ISSN
1520-6149
Citations
0
PageRank
0.34
References
0
Authors
3

Name           Order  Citations  PageRank
Shang-Yu Su    1      9          4.88
Shan-Wei Lin   2      0          0.34
Yun-Nung Chen  3      324        35.41