Title
Gaussian Process Prior Variational Autoencoders
Abstract
Variational autoencoders (VAEs) are a powerful and widely used class of models for learning complex data distributions in an unsupervised fashion. One important limitation of VAEs is the prior assumption that latent sample representations are independent and identically distributed. However, for many important datasets, such as time series of images, this assumption is too strong: accounting for covariances between samples, such as correlations over time, can yield a more appropriate model specification and improve performance in downstream tasks. In this work, we introduce a new model, the Gaussian Process (GP) Prior Variational Autoencoder (GPPVAE), to specifically address this issue. The GPPVAE aims to combine the power of VAEs with the ability to model correlations afforded by GP priors. To achieve efficient inference in this new class of models, we leverage structure in the covariance matrix and introduce a new stochastic backpropagation strategy that allows stochastic gradients to be computed in a distributed and low-memory fashion. We show that our method outperforms conditional VAEs (CVAEs) and an adaptation of standard VAEs in two image data applications.
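The core idea in the abstract is to replace the i.i.d. standard-normal prior over latent codes with a GP prior that correlates latents across samples (e.g., across time). Below is a minimal PyTorch sketch of that idea only. All names (GPPVAESketch, rbf_kernel, elbo) are hypothetical, the RBF kernel and the dense Cholesky solve are assumptions for illustration, and the paper's structured-covariance factorization and low-memory stochastic backpropagation scheme are not reproduced here.

```python
# Sketch: a VAE whose latent prior is a GP over an auxiliary index t
# (e.g., time), instead of i.i.d. N(0, I). Constants of the log-densities
# are dropped throughout. Not the authors' implementation.
import torch
import torch.nn as nn

def rbf_kernel(t, lengthscale=1.0, jitter=1e-4):
    # Squared-exponential kernel over auxiliary inputs t of shape (N, 1).
    d2 = (t - t.T).pow(2)
    K = torch.exp(-0.5 * d2 / lengthscale**2)
    return K + jitter * torch.eye(t.shape[0])  # jitter for numerical stability

class GPPVAESketch(nn.Module):
    def __init__(self, x_dim, z_dim=2, h_dim=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, 2 * z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))
        self.z_dim = z_dim

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return self.dec(z), mu, logvar, z

def elbo(model, x, t):
    x_hat, mu, logvar, z = model(x)
    recon = -0.5 * ((x - x_hat) ** 2).sum()    # Gaussian likelihood, unit variance
    K = rbf_kernel(t)                          # (N, N) prior covariance across samples
    L = torch.linalg.cholesky(K)
    # GP prior log-density log N(z_d | 0, K), summed over latent dimensions d.
    alpha = torch.cholesky_solve(z, L)         # K^{-1} z, shape (N, z_dim)
    logdet = 2.0 * torch.log(torch.diagonal(L)).sum()
    prior = -0.5 * (z * alpha).sum() - 0.5 * model.z_dim * logdet
    entropy = 0.5 * logvar.sum()               # entropy of q, up to constants
    return recon + prior + entropy

if __name__ == "__main__":
    # Toy usage: N frames of a 16-dimensional "video", indexed by time t.
    N, x_dim = 32, 16
    t = torch.linspace(0, 1, N).unsqueeze(-1)
    x = torch.randn(N, x_dim)
    model = GPPVAESketch(x_dim)
    loss = -elbo(model, x, t)
    loss.backward()
```

Note that the dense Cholesky factorization above costs O(N^3) in the number of samples; the paper's contribution is precisely to avoid this by exploiting structure in the covariance matrix.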
Year
2018
Venue
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018)
Keywords
gradient descent, gaussian process, covariance matrix, model specification, gaussian process prior variational autoencoders
DocType
Conference
Volume
31
ISSN
1049-5258
Citations
1
PageRank
0.36
References
0
Authors
5
Name                   | Order | Citations | PageRank
Francesco Paolo Casale | 1     | 5         | 0.74
Adrian V. Dalca        | 2     | 248       | 31.82
Luca Saglietti         | 3     | 22        | 4.31
Jennifer Listgarten    | 4     | 129       | 18.11
Nicolò Fusi            | 5     | 172       | 10.23