Title
Stability of Randomized Learning Algorithms
Abstract
We extend the existing theory on stability, namely how much changes in the training data influence the estimated models, and on the generalization performance of deterministic learning algorithms to the case of randomized algorithms. We give formal definitions of stability for randomized algorithms and prove non-asymptotic bounds on the difference between the empirical and expected error, as well as between the leave-one-out and expected error, of such algorithms; these bounds depend on their random stability. The setup we develop for this purpose can also be used to study randomized learning algorithms more generally. We then use these general results to study the effects of bagging on the stability of a learning method and to prove non-asymptotic bounds on the predictive performance of bagging, which could not be obtained with the existing theory of stability for deterministic learning algorithms.
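For context, the following is a minimal LaTeX sketch of the deterministic result this paper extends: the uniform stability definition and non-asymptotic generalization bound of Bousquet and Elisseeff (2002). Roughly speaking, the paper's randomized definitions replace these deterministic quantities with expectations over the algorithm's internal randomness; the precise randomized statements are given in the paper itself.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Deterministic uniform stability and its generalization bound
% (Bousquet and Elisseeff, 2002): the notion this paper extends
% to randomized algorithms.
An algorithm $A$ trained on $S=\{z_1,\dots,z_m\}$ has uniform
stability $\beta$ if, for every $S$ and every $i$,
\[
\sup_{z}\,\bigl|\ell(A_S,z)-\ell(A_{S^{\setminus i}},z)\bigr|\le\beta,
\]
where $S^{\setminus i}$ denotes $S$ with the $i$-th example removed
and $\ell$ is a loss bounded by $M$. Then, with probability at least
$1-\delta$ over the draw of $S$, the expected error $R$ and the
empirical error $R_{\mathrm{emp}}$ satisfy
\[
R(A_S)\;\le\;R_{\mathrm{emp}}(A_S)+2\beta
+\bigl(4m\beta+M\bigr)\sqrt{\frac{\ln(1/\delta)}{2m}}.
\]
\end{document}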
Year
2005
Venue
Journal of Machine Learning Research
Keywords
bootstrap methods, randomized learning algorithms, random stability, generalization performance, estimated model, stability, formal definition, generalization error, sensitivity analysis, randomized algorithm, expected error, bagging, non-asymptotic bound, existing theory, general result, predictive performance, leave-one-out error
Field
Training set, Randomized algorithm, Stability (learning theory), Empirical risk minimization, Algorithm, Artificial intelligence, Generalization error, Mathematics, Leave-one-out error, Machine learning, Randomized algorithms as zero-sum games
DocType
Journal
Volume
6
ISSN
1532-4435
Citations
33
PageRank
1.60
References
8
Authors
3
Name                 Order  Citations  PageRank
Andre Elisseeff      1      154        6.62
Theodoros Evgeniou   2      3005       219.65
Massimiliano Pontil  3      5820       472.96