Title
Variational Autoencoder With Optimizing Gaussian Mixture Model Priors
Abstract
The latent-variable prior of the variational autoencoder (VAE) is usually a standard Gaussian distribution, chosen for computational convenience, but this choice tends to underfit the data. This paper proposes a variational autoencoder with optimizing Gaussian mixture model priors. The method constructs the prior distribution from a Gaussian mixture model and uses the Kullback-Leibler (KL) distance between the posterior and prior distributions to iteratively optimize the prior on the data. A greedy algorithm is used to approximate the KL distance, defining an approximate variational lower bound as the loss function and realizing the VAE with optimizing Gaussian mixture model priors. Compared with the standard VAE, the proposed method obtains state-of-the-art results on the MNIST, Omniglot, and Frey Face datasets, showing that a VAE with optimizing Gaussian mixture model priors can learn a better model.
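The pipeline the abstract describes can be made concrete in a short sketch. Below is a minimal PyTorch illustration under stated assumptions: the layer sizes, mixture size, and the single-sample Monte Carlo estimate of KL(q(z|x) || p(z)) are all assumptions for illustration, not details from the paper, and the authors' greedy KL approximation is not reproduced here. It shows a diagonal-Gaussian encoder, a decoder, and a Gaussian mixture prior whose weights, means, and variances are trained jointly with the network, which is the sense in which the prior is optimized based on the data.

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

LOG2PI = math.log(2 * math.pi)

class GMMPriorVAE(nn.Module):
    """Sketch of a VAE whose prior p(z) is a learnable Gaussian mixture."""
    def __init__(self, x_dim=784, z_dim=20, n_components=10):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 400), nn.ReLU())
        self.fc_mu = nn.Linear(400, z_dim)
        self.fc_logvar = nn.Linear(400, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 400), nn.ReLU(),
                                 nn.Linear(400, x_dim))
        # GMM prior parameters; the optimizer updates them alongside the
        # encoder/decoder, so the prior is iteratively fitted to the data.
        self.pi_logits = nn.Parameter(torch.zeros(n_components))
        self.mu_p = nn.Parameter(torch.randn(n_components, z_dim) * 0.5)
        self.logvar_p = nn.Parameter(torch.zeros(n_components, z_dim))

    def log_prior(self, z):
        # log p(z) = logsumexp_k [ log pi_k + log N(z; mu_k, sigma_k^2) ]
        diff = z.unsqueeze(1) - self.mu_p                      # (B, K, z_dim)
        log_comp = -0.5 * (self.logvar_p + diff.pow(2) / self.logvar_p.exp()
                           + LOG2PI).sum(-1)                   # (B, K)
        log_pi = F.log_softmax(self.pi_logits, dim=0)          # (K,)
        return torch.logsumexp(log_pi + log_comp, dim=1)       # (B,)

    def loss(self, x):
        # x is assumed to be in [0, 1] (e.g., binarized MNIST pixels).
        h = self.enc(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterize
        logits = self.dec(z)
        # log q(z|x) at the sampled z (diagonal Gaussian posterior).
        log_q = -0.5 * (logvar + (z - mu).pow(2) / logvar.exp()
                        + LOG2PI).sum(-1)
        # KL(q(z|x) || p(z)) has no closed form against a GMM prior, hence
        # the approximation; here a one-sample Monte Carlo estimate stands
        # in for the paper's greedy algorithm.
        kl = log_q - self.log_prior(z)
        rec = F.binary_cross_entropy_with_logits(logits, x,
                                                 reduction='none').sum(-1)
        return (rec + kl).mean()  # negative ELBO to minimize

Training with a single optimizer over model.parameters() updates the mixture parameters together with the encoder and decoder; swapping the Monte Carlo estimate for the paper's greedy KL approximation would recover the proposed loss more faithfully.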
Year
2020
DOI
10.1109/ACCESS.2020.2977671
Venue
IEEE ACCESS
Keywords
Gaussian mixture model, Gaussian distribution, Training, Standards, Neural networks, Aggregates, Variational autoencoder, Kullback-Leibler distance
DocType
Journal
Volume
8
ISSN
2169-3536
Citations
0
PageRank
0.34
References
0
Authors
6
Name             Order  Citations  PageRank
Chunsheng Guo    1      7          4.59
Jialuo Zhou      2      0          0.34
Huahua Chen      3      0          0.34
Na Ying          4      0          0.34
Jianwu Zhang     5      0          0.34
Di Zhou          6      2          1.38