Abstract |
---|
Recent advances in variational inference that use an inference or recognition network for training and evaluating deep directed graphical models have moved well beyond traditional variational inference and Markov chain Monte Carlo methods. These techniques offer greater flexibility with simpler and faster inference; yet training and evaluation remain a challenge. We propose a method for improving the per-example approximate posterior by iterative refinement, which can provide notable gains in maximizing the variational lower bound on the log-likelihood and works with both continuous and discrete latent variables. We evaluate our approach as a method of training and evaluating directed graphical models. We show that, when used for training, iterative refinement improves the variational lower bound and can also improve the log-likelihood over related methods. We also show that iterative refinement can be used to obtain a better estimate of the log-likelihood in any directed model trained with mean-field inference. |
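The core idea of the abstract — taking a per-example mean-field posterior and refining its parameters by gradient ascent on the variational lower bound (ELBO) — can be illustrated on a toy model. The sketch below is an assumption-laden illustration, not the paper's method: it uses a one-dimensional linear-Gaussian model (z ~ N(0, 1), x | z ~ N(z, sigma2)) where the ELBO is available in closed form, and refines the Gaussian posterior parameters (mu, log s2) for a single example.

```python
import math

# Toy model (an assumption for illustration, not the paper's architecture):
#   z ~ N(0, 1),  x | z ~ N(z, sigma2)
# Mean-field posterior q(z) = N(mu, s2). Iterative refinement here means
# per-example gradient ascent on the closed-form ELBO.

def elbo(x, mu, s2, sigma2):
    """Closed-form evidence lower bound for the toy Gaussian model."""
    ll = -0.5 * math.log(2 * math.pi * sigma2) - ((x - mu) ** 2 + s2) / (2 * sigma2)
    prior = -0.5 * math.log(2 * math.pi) - (mu ** 2 + s2) / 2
    entropy = 0.5 * math.log(2 * math.pi * math.e * s2)
    return ll + prior + entropy

def refine(x, mu, log_s2, sigma2, steps=200, lr=0.05):
    """Iteratively refine the per-example posterior parameters (mu, log s2)."""
    for _ in range(steps):
        s2 = math.exp(log_s2)
        g_mu = (x - mu) / sigma2 - mu                 # d ELBO / d mu
        g_log_s2 = 0.5 - s2 * (0.5 / sigma2 + 0.5)   # d ELBO / d log s2
        mu += lr * g_mu
        log_s2 += lr * g_log_s2
    return mu, math.exp(log_s2)

x, sigma2 = 2.0, 0.5
mu0, s20 = 0.0, 1.0  # e.g. a crude initial (amortized) proposal
mu, s2 = refine(x, mu0, math.log(s20), sigma2)
# For this model the exact posterior is N(x / (1 + sigma2), sigma2 / (1 + sigma2)),
# so refinement should drive (mu, s2) toward those values and raise the ELBO.
```

In this conjugate toy case the refined parameters converge to the exact posterior; in the deep, non-conjugate models the abstract targets, the same per-example refinement loop would instead tighten the bound around whatever the inference network proposes.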
Property | Value
---|---
Year | 2015
Venue | CoRR
DocType | Journal
Volume | abs/1511.06382
Citations | 2
PageRank | 0.42
References | 14
Authors | 6
Keywords | Iterative refinement, Mathematical optimization, Markov chain Monte Carlo, Computer science, Inference, Upper and lower bounds, Latent variable, Artificial intelligence, Graphical model, Machine learning, Variational message passing
Name | Order | Citations | PageRank |
---|---|---|---
Devon Hjelm | 1 | 28 | 2.23 |
Kyunghyun Cho | 2 | 6803 | 316.85 |
Junyoung Chung | 3 | 1115 | 39.41 |
Ruslan Salakhutdinov | 4 | 12190 | 764.15 |
Vince D Calhoun | 5 | 2769 | 268.91 |
Nebojsa Jojic | 6 | 1397 | 165.68 |