Title: Multi-level Monte Carlo Variational Inference
Abstract:
In many statistics and machine learning frameworks, stochastic optimization with high-variance gradients has become an important problem. For example, the performance of Monte Carlo variational inference (MCVI) depends heavily on the variance of its stochastic gradient estimator. In this paper, we focus on this problem and propose a novel variance-reduction framework based on the multi-level Monte Carlo (MLMC) method. The framework is naturally compatible with reparameterization gradient estimators, an efficient class of variance-reduced estimators built on the reparameterization trick. We also propose a novel MCVI algorithm for MLMC-based stochastic gradient estimation in which the sample size $N$ is adapted at each iteration according to the ratio of the gradient variance to the computational cost. We furthermore prove that, under our method, the norm of the gradient converges to $0$ asymptotically. Finally, we evaluate our method against benchmark methods in several experiments and show that it reduces gradient variance and sampling cost efficiently and reaches values closer to the optimum than the other methods.
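The abstract compresses two ideas: a coupled, telescoping MLMC estimator of a reparameterization gradient, and a per-level sample size chosen from the variance/cost ratio. The Python sketch below illustrates both on a toy problem; it is not the authors' algorithm. The Gaussian target, the level hierarchy (Taylor truncations of $\exp$), and the `budget` constant are illustrative assumptions, while the allocation rule $N_\ell \propto \sqrt{V_\ell / C_\ell}$ is the standard MLMC sample-size choice (Giles, 2008).

```python
import math

import numpy as np

rng = np.random.default_rng(0)


def grad_mu_level(mu, sigma, eps, level):
    """Reparameterization gradient of E[f_l(mu + sigma*eps)] w.r.t. mu.

    f_l truncates the Taylor series of f(z) = exp(z) after 2**(level + 2)
    terms, so higher levels approximate exp more closely; the derivative of
    the truncated series is the same series with one fewer term.
    """
    z = mu + sigma * eps
    ks = np.arange(2 ** (level + 2) - 1)
    facts = np.array([math.factorial(int(k)) for k in ks], dtype=float)
    return (z[:, None] ** ks / facts).sum(axis=1)


def level_correction(mu, sigma, n, level):
    """One batch of the coupled MLMC term P_l - P_{l-1}.

    The SAME eps drives the fine and coarse levels, so the difference has a
    variance that decays quickly with the level.
    """
    eps = rng.standard_normal(n)
    fine = grad_mu_level(mu, sigma, eps, level)
    if level == 0:
        return fine
    return fine - grad_mu_level(mu, sigma, eps, level - 1)


mu, sigma, L = 0.5, 0.3, 4
costs = 2.0 ** np.arange(L + 1)  # proxy cost: series terms per sample

# Pilot runs estimate the per-level variance V_l of the coupled corrections.
variances = np.array(
    [level_correction(mu, sigma, 200, l).var() for l in range(L + 1)]
)

# Classic MLMC allocation N_l ~ sqrt(V_l / C_l), mirroring the abstract's
# "ratio of the variance and computational cost"; `budget` is illustrative.
budget = 5000.0
n_l = np.maximum(2, np.ceil(budget * np.sqrt(variances / costs))).astype(int)

# Telescoping sum over levels yields the gradient estimate.
estimate = sum(level_correction(mu, sigma, int(n), l).mean()
               for l, n in enumerate(n_l))
true_grad = math.exp(mu + sigma ** 2 / 2)  # d/dmu E[exp(mu + sigma*eps)]
print(f"per-level N: {n_l.tolist()}  "
      f"MLMC grad: {estimate:.4f}  true: {true_grad:.4f}")
```

Because both levels in each correction share the same $\varepsilon$, the variance $V_\ell$ of the differences shrinks as $\ell$ grows, so the allocation rule spends most samples on the cheap coarse levels, which is the source of MLMC's cost savings.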
Year: 2019
DOI: v22/20-653.html
Venue: arXiv: Machine Learning
DocType: Journal
Volume: abs/1902.00468
Issue: 1
ISSN: 1532-4435
Citations: 0
PageRank: 0.34
References: 0
Authors: 2
Name                Order  Citations  PageRank
Masahiro Fujisawa   1      0          0.68
Issei Sato          2      331        41.59