| Abstract |
|---|
| Gaussian process (GP) models form a core part of probabilistic machine learning. Considerable research effort has been made into attacking three issues with GP models: how to compute efficiently when the number of data points is large; how to approximate the posterior when the likelihood is not Gaussian; and how to estimate covariance function parameter posteriors. This paper simultaneously addresses all three, using a variational approximation to the posterior which is sparse in support of the function but otherwise free-form. The result is a Hybrid Monte Carlo sampling scheme which allows for a non-Gaussian approximation over the function values and covariance parameters simultaneously, with efficient computations based on inducing-point sparse GPs. Code to replicate each experiment in this paper is available at github.com/sparseMCMC. |
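The abstract's key computational ingredient is Hybrid (Hamiltonian) Monte Carlo sampling. The sketch below is a minimal, self-contained toy HMC sampler, not the paper's sparse-GP implementation: the target here is a stand-in 2-D Gaussian rather than the joint posterior over inducing-point values and covariance parameters, and the function name `hmc_sample` and all tuning values are illustrative assumptions.

```python
import numpy as np

def hmc_sample(logp, grad_logp, x0, n_samples=500, eps=0.1, n_leapfrog=20, seed=0):
    """Toy Hybrid Monte Carlo sampler (illustrative only, not the paper's code)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal(x.shape)          # resample auxiliary momentum
        x_new, p_new = x.copy(), p.copy()
        # Leapfrog integration of the Hamiltonian dynamics:
        # half momentum step, alternating full steps, final half momentum step.
        p_new = p_new + 0.5 * eps * grad_logp(x_new)
        for _ in range(n_leapfrog - 1):
            x_new = x_new + eps * p_new
            p_new = p_new + eps * grad_logp(x_new)
        x_new = x_new + eps * p_new
        p_new = p_new + 0.5 * eps * grad_logp(x_new)
        # Metropolis accept/reject on the total Hamiltonian (energy) change.
        h_old = -logp(x) + 0.5 * p @ p
        h_new = -logp(x_new) + 0.5 * p_new @ p_new
        if rng.random() < np.exp(min(0.0, h_old - h_new)):
            x = x_new
        samples.append(x.copy())
    return np.array(samples)

# Stand-in target: a standard 2-D Gaussian log-density and its gradient.
logp = lambda x: -0.5 * x @ x
grad = lambda x: -x
draws = hmc_sample(logp, grad, np.zeros(2), n_samples=2000)
print(draws.mean(axis=0))  # approximately [0, 0]
```

In the paper's setting, `logp` would be the (whitened) log joint over inducing outputs and covariance hyperparameters instead of this toy Gaussian; the leapfrog-plus-accept/reject structure is the same.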
| Year | Venue | Field |
|---|---|---|
| 2015 | Annual Conference on Neural Information Processing Systems | Covariance function, Markov chain Monte Carlo, Computer science, Gaussian, Artificial intelligence, Gaussian process, Probabilistic logic, Replicate, Machine learning, Covariance, Computation |
| DocType | Volume | ISSN |
|---|---|---|
| Conference | 28 | 1049-5258 |
| Citations | PageRank | References |
|---|---|---|
| 12 | 0.69 | 22 |
| Authors |
|---|
| 4 |

| Name | Order | Citations | PageRank |
|---|---|---|---|
| James Hensman | 1 | 265 | 20.05 |
| Alexander G. de G. Matthews | 2 | 49 | 2.72 |
| Maurizio Filippone | 3 | 618 | 39.58 |
| Zoubin Ghahramani | 4 | 10455 | 1264.39 |