Title
Improving latent variable descriptiveness by modelling rather than ad-hoc factors.
Abstract
Powerful generative models, particularly in natural language modelling, are commonly trained by maximizing a variational lower bound on the data log likelihood. These models often suffer from poor use of their latent variable, with ad-hoc annealing factors used to encourage retention of information in the latent variable. We discuss an alternative and general approach to latent variable modelling, based on an objective that encourages a perfect reconstruction by tying a stochastic autoencoder with a variational autoencoder (VAE). This ensures by design that the latent variable captures information about the observations, whilst retaining the ability to generate well. Interestingly, although our model is fundamentally different from a VAE, the lower bound attained is identical to the standard VAE bound but with the addition of a simple pre-factor, thus providing a formal interpretation of the commonly used, ad-hoc pre-factors in training VAEs.
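For context, a minimal LaTeX sketch (not taken from the paper) of the objectives the abstract refers to: the standard VAE evidence lower bound, and the ad-hoc weighted variant commonly used in practice (KL annealing / beta-weighting), where the weight beta plays the role of the pre-factor mentioned above. The notation q_\phi, p_\theta and the placement of \beta on the KL term are illustrative assumptions; the exact form and placement of the pre-factor derived in the paper is not given in this record.

% Standard VAE evidence lower bound (ELBO) for an observation x
\mathcal{L}_{\mathrm{VAE}}(x) = \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] - \mathrm{KL}\big(q_\phi(z \mid x)\,\|\,p(z)\big)

% Ad-hoc weighted variant often used to keep the latent variable informative;
% beta here is the illustrative pre-factor (an assumption, not the paper's derived factor)
\mathcal{L}_{\beta}(x) = \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] - \beta\,\mathrm{KL}\big(q_\phi(z \mid x)\,\|\,p(z)\big)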
Year
2019
DOI
10.1007/s10994-019-05830-1
Venue
Machine Learning
Keywords
Generative modelling, Latent variable modelling, Variational autoencoders, Variational inference, Natural language processing
DocType
Journal
Volume
108
Issue
8
ISSN
0885-6125
Citations
0
PageRank
0.34
References
0
Authors
4
Name | Order | Citations | PageRank
Alex Mansbridge | 1 | 0 | 0.34
Roberto Fierimonte | 2 | 0 | 0.34
Ilya Feige | 3 | 1 | 3.47
David Barber | 4 | 404 | 45.57