Title
Iterative Refinement of Approximate Posterior for Training Directed Belief Networks
Abstract
Recent advances in variational inference that use an inference or recognition network to train and evaluate deep directed graphical models have moved well beyond traditional variational inference and Markov chain Monte Carlo methods. These techniques offer greater flexibility with simpler and faster inference, yet training and evaluation remain a challenge. We propose a method for improving the per-example approximate posterior by iterative refinement, which can provide notable gains in maximizing the variational lower bound on the log-likelihood and works with both continuous and discrete latent variables. We evaluate our approach as a method for both training and evaluating directed graphical models. We show that, when used for training, iterative refinement improves the variational lower bound and can also improve the log-likelihood over related methods. We also show that iterative refinement can be used to obtain a better estimate of the log-likelihood in any directed model trained with mean-field inference.
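To make the refinement idea concrete, below is a minimal sketch under illustrative assumptions, not the paper's implementation: a toy linear-Gaussian model p(z) = N(0, I), p(x|z) = N(Wz + b, I) with a mean-field Gaussian posterior q(z|x) = N(mu, diag(s^2)), for which the variational lower bound is analytic. The names elbo, refine, W, b, lr, and steps are all hypothetical; in the paper's setting, the initial (mu, log s) would come from an inference network, and the per-example parameters would then be refined by ascent on the bound.

import numpy as np

def elbo(x, mu, log_s, W, b):
    # Analytic ELBO for p(z)=N(0,I), p(x|z)=N(Wz+b, I), q(z|x)=N(mu, diag(exp(log_s)^2)).
    # (Toy model chosen so the bound and its gradients are exact; an assumption, not the paper's model.)
    s2 = np.exp(2.0 * log_s)
    resid = x - W @ mu - b
    col_norms = (W ** 2).sum(axis=0)  # ||W[:, j]||^2 for each latent dimension j
    recon = -0.5 * (resid @ resid + s2 @ col_norms) - 0.5 * x.size * np.log(2.0 * np.pi)
    kl = 0.5 * np.sum(s2 + mu ** 2 - 1.0 - 2.0 * log_s)  # KL(q || N(0, I))
    return recon - kl

def refine(x, mu, log_s, W, b, steps=20, lr=0.05):
    # Iteratively refine the per-example posterior parameters by gradient ascent on the ELBO.
    col_norms = (W ** 2).sum(axis=0)
    for _ in range(steps):
        s2 = np.exp(2.0 * log_s)
        grad_mu = W.T @ (x - W @ mu - b) - mu       # dELBO/dmu
        grad_log_s = 1.0 - s2 * (col_norms + 1.0)   # dELBO/dlog_s
        mu = mu + lr * grad_mu
        log_s = log_s + lr * grad_log_s
    return mu, log_s

# Toy usage: zero vectors stand in for an inference network's initial output.
rng = np.random.default_rng(0)
W, b, x = rng.normal(size=(10, 3)), rng.normal(size=10), rng.normal(size=10)
mu0, log_s0 = np.zeros(3), np.zeros(3)
mu, log_s = refine(x, mu0, log_s0, W, b)
print(elbo(x, mu0, log_s0, W, b), "->", elbo(x, mu, log_s, W, b))  # bound should increase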
Year
2015
Venue
CoRR
Field
Iterative refinement, Mathematical optimization, Markov chain Monte Carlo, Computer science, Inference, Upper and lower bounds, Latent variable, Artificial intelligence, Graphical model, Machine learning, Variational message passing
DocType
Journal
Volume
abs/1511.06382
Citations
2
PageRank
0.42
References
14
Authors
6
Name                  Order  Citations  PageRank
Devon Hjelm           1      28         2.23
Kyunghyun Cho         2      6803       316.85
Junyoung Chung        3      1115       39.41
Ruslan Salakhutdinov  4      12190      764.15
Vince D Calhoun       5      2769       268.91
Nebojsa Jojic         6      1397       165.68