Title
Efficient Optimization for Sparse Gaussian Process Regression.
Abstract
We propose an efficient optimization algorithm to select a subset of training data as the inducing set for sparse Gaussian process regression. Previous methods either use different objective functions for inducing set and hyperparameter selection, or else optimize the inducing set by gradient-based continuous optimization. The former approaches are harder to interpret and suboptimal, whereas the latter cannot be applied to discrete input domains or to kernel functions that are not differentiable with respect to the input. The algorithm proposed in this work estimates an inducing set and the hyperparameters using a single objective. It can be used to optimize either the marginal likelihood or a variational free energy. Space and time complexity are linear in training set size, and the algorithm can be applied to large regression problems on discrete or continuous domains. Empirical evaluation shows state-of-the-art performance in discrete cases, and competitive prediction results as well as a favorable trade-off between training and test time in continuous cases.
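For context, the kind of model the abstract refers to can be sketched with the standard subset-of-regressors (Nyström) predictive mean, where an inducing set is a subset of the training inputs. This is a minimal illustration of sparse GP regression in general, not the paper's selection algorithm; the RBF kernel, hyperparameter values, and the fixed evenly strided inducing subset below are all illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def sor_predict(X, y, Xu, Xs, noise=0.1, **kw):
    """Subset-of-regressors predictive mean at test points Xs,
    given inducing inputs Xu drawn from the training set X."""
    Kuu = rbf_kernel(Xu, Xu, **kw)   # m x m
    Kuf = rbf_kernel(Xu, X, **kw)    # m x n
    Kus = rbf_kernel(Xu, Xs, **kw)   # m x s
    # Solve (noise^2 * Kuu + Kuf Kuf^T) a = Kuf y; cost is O(n m^2),
    # i.e. linear in the training set size n for fixed m.
    A = noise ** 2 * Kuu + Kuf @ Kuf.T
    a = np.linalg.solve(A, Kuf @ y)
    return Kus.T @ a

# Toy 1-D regression; the inducing set is a strided training subset (m = 10),
# standing in for whatever subset a selection algorithm would choose.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
Xu = X[::20]
mu = sor_predict(X, y, Xu, np.array([[0.0]]))
```

The point of the sketch is the complexity claim in the abstract: with m inducing points fixed, the dominant cost is forming the m-by-n cross-covariance and the m-by-m system, so training scales linearly with n.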
Year
2013
DOI
10.1109/TPAMI.2015.2424873
Venue
IEEE Transactions on Pattern Analysis and Machine Intelligence
Keywords
gaussian process regression, low rank, matrix factorization, sparsity
DocType
Conference
Volume
abs/1310.6007
Issue
12
ISSN
1939-3539
Citations
11
PageRank
0.58
References
19
Authors
4
Name                Order  Citations  PageRank
Cao, Yanshuai       1      62         7.43
Marcus A. Brubaker  2      208        17.33
David J. Fleet      3      5236       550.74
Aaron Hertzmann     4      6002       352.67