| Abstract |
|---|
| In this paper we develop a dynamic form of Bayesian optimization for machine learning models with the goal of rapidly finding good hyperparameter settings. Our method uses the partial information gained during the training of a machine learning model in order to decide whether to pause training and start a new model, or resume the training of a previously-considered model. We specifically tailor our method to machine learning problems by developing a novel positive-definite covariance kernel to capture a variety of training curves. Furthermore, we develop a Gaussian process prior that scales gracefully with additional temporal observations. Finally, we provide an information-theoretic framework to automate the decision process. Experiments on several common machine learning models show that our approach is extremely effective in practice. |
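The decision loop the abstract describes — pause one partially trained model and resume another based on its training curve so far — can be sketched in miniature. The toy below is illustrative only: it replaces the paper's Gaussian-process posterior over training curves and its information-theoretic acquisition with a naive exponential extrapolation, and the names `freeze_thaw`, `step_fn`, and the synthetic loss curves are assumptions for the sketch, not the authors' code.

```python
def predicted_asymptote(curve):
    """Extrapolate a loss curve's limiting value, assuming it decays
    roughly exponentially (a crude stand-in for the paper's GP
    training-curve model)."""
    if len(curve) < 3:
        return curve[-1]
    d1 = curve[-2] - curve[-3]
    d2 = curve[-1] - curve[-2]
    # Only extrapolate genuinely shrinking improvements.
    if d1 == 0 or abs(d2) >= abs(d1):
        return curve[-1]
    r = d2 / d1                           # estimated per-epoch decay ratio
    return curve[-1] + d2 * r / (1 - r)   # geometric-series limit

def freeze_thaw(candidates, step_fn, budget, warmup=3):
    """Run each candidate for `warmup` epochs, then spend `budget`
    further epochs, each time 'thawing' (resuming) the candidate whose
    extrapolated final loss looks best."""
    curves = {k: [step_fn(k, t) for t in range(warmup)] for k in candidates}
    for _ in range(budget):
        best = min(curves, key=lambda k: predicted_asymptote(curves[k]))
        curves[best].append(step_fn(best, len(curves[best])))
    return curves

# Toy usage: two hyperparameter settings with synthetic loss curves.
# "B" starts with higher loss but has the better asymptote, so after
# warmup the extrapolation routes the remaining epochs to it.
truth = {"A": (0.30, 1.0), "B": (0.10, 2.0)}  # (asymptote, amplitude)
step = lambda k, t: truth[k][0] + truth[k][1] * 0.5 ** t
curves = freeze_thaw(["A", "B"], step, budget=6)
```

In this toy run, candidate "A" is frozen after its warmup epochs while "B" receives all six budgeted epochs, mirroring the pause/resume behavior the abstract describes, though without any of the paper's uncertainty modeling.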
| Year | Venue | Field |
|---|---|---|
| 2014 | CoRR | Online machine learning, Hyperparameter optimization, Stability (learning theory), Active learning (machine learning), Hyperparameter, Computer science, Wake-sleep algorithm, Bayesian optimization, Artificial intelligence, Relevance vector machine, Machine learning |

| DocType | Volume | Citations |
|---|---|---|
| Journal | abs/1406.3896 | 30 |

| PageRank | References | Authors |
|---|---|---|
| 1.18 | 12 | 3 |
| Name | Order | Citations | PageRank |
|---|---|---|---|
| Kevin Swersky | 1 | 1118 | 52.13 |
| Jasper Snoek | 2 | 1051 | 62.71 |
| Ryan P. Adams | 3 | 2286 | 131.88 |