| Abstract |
|---|
| An important task in computational statistics and machine learning is to approximate a posterior distribution $p(x)$ with an empirical measure supported on a set of representative points $\{x_i\}_{i=1}^n$. This paper focuses on methods where the selection of points is essentially deterministic, with an emphasis on achieving accurate approximation when $n$ is small. To this end, we present 'Stein Points'. The idea is to exploit either a greedy or a conditional gradient method to iteratively minimise a kernel Stein discrepancy between the empirical measure and $p(x)$. Our empirical results demonstrate that Stein Points enable accurate approximation of the posterior at modest computational cost. In addition, theoretical results are provided to establish convergence of the method. |
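The greedy variant described in the abstract can be sketched as follows: each new point is the candidate that most reduces the kernel Stein discrepancy of the current empirical measure. This is a minimal 1-D illustration only, using a standard-normal target (score $s(x) = -x$), a Gaussian base kernel, and a grid of candidates — all illustrative assumptions, not the paper's exact kernel or optimiser.

```python
import numpy as np

def stein_kernel(x, y, ell=1.0):
    # Langevin Stein kernel for a standard-normal target (score s(x) = -x),
    # built on a Gaussian RBF base kernel with lengthscale ell (1-D case).
    d = x - y
    k = np.exp(-d**2 / (2 * ell**2))
    return k * (1.0 / ell**2 - d**2 / ell**4 - d**2 / ell**2 + x * y)

def greedy_stein_points(n, candidates, ell=1.0):
    # Greedy selection: at each step, add the candidate minimising
    # k_p(x, x)/2 + sum_j k_p(x, x_j), which greedily minimises the
    # (squared) kernel Stein discrepancy of the augmented point set.
    points = []
    for _ in range(n):
        xs = np.array(points)
        scores = [
            stein_kernel(c, c, ell) / 2
            + (stein_kernel(c, xs, ell).sum() if points else 0.0)
            for c in candidates
        ]
        points.append(candidates[int(np.argmin(scores))])
    return np.array(points)

grid = np.linspace(-4, 4, 801)  # hypothetical candidate grid
pts = greedy_stein_points(9, grid)
```

Because $k_p(x, x)$ is minimised at the mode for this target, the first selected point lands at $0$, and subsequent points spread out symmetrically to cover the distribution.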
| Year | Venue | DocType |
|---|---|---|
| 2018 | ICML | Conference |

| Volume | Citations | PageRank |
|---|---|---|
| abs/1803.10161 | 0 | 0.34 |

| References | Authors |
|---|---|
| 0 | 5 |
| Name | Order | Citations | PageRank |
|---|---|---|---|
| Wilson Ye Chen | 1 | 5 | 1.12 |
| Lester W. Mackey | 2 | 73 | 5.52 |
| Jackson Gorham | 3 | 0 | 0.68 |
| François-Xavier Briol | 4 | 19 | 3.96 |
| Chris J. Oates | 5 | 69 | 11.28 |