Abstract |
---|
The dependence of the classification error on the size of a bagging ensemble can be modeled within the framework of Monte Carlo theory for ensemble learning. These error curves are parametrized in terms of the probability that a given instance is misclassified by one of the predictors in the ensemble. Out-of-bootstrap estimates of these probabilities can be used to model generalization error curves using only information from the training data. Since these estimates are obtained from a finite number of hypotheses, they exhibit fluctuations. As a consequence, the modeled curves are biased and tend to overestimate the true generalization error. This bias becomes negligible when the number of hypotheses used in the estimator is sufficiently large. Experiments are carried out to analyze the consistency of the proposed estimator. |
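The Monte Carlo model the abstract refers to can be sketched as follows. Under the usual assumption of this framework, an instance that is misclassified by a single randomly drawn hypothesis with probability π is misclassified by a majority vote of T independent hypotheses with the binomial tail probability of a strict majority of wrong votes. The function names and the per-instance probabilities below are illustrative, not taken from the paper; in the paper's setting the probabilities would come from out-of-bootstrap estimates.

```python
from math import comb

def ensemble_error(pi, T):
    """Probability that a majority vote of T independent hypotheses,
    each wrong with probability pi on this instance, misclassifies it."""
    # Sum the binomial probabilities of every strict majority of wrong votes.
    return sum(comb(T, k) * pi**k * (1 - pi)**(T - k)
               for k in range(T // 2 + 1, T + 1))

def error_curve(pis, sizes):
    """Modeled generalization error (averaged over instances)
    as a function of the ensemble size."""
    return [sum(ensemble_error(p, T) for p in pis) / len(pis) for T in sizes]

# Hypothetical per-instance misclassification probabilities
# (in practice, out-of-bootstrap estimates from the training data).
pis = [0.1, 0.45, 0.8]
curve = error_curve(pis, [1, 11, 101])  # error for ensembles of 1, 11, 101 members
```

As T grows, instances with π < 0.5 are almost surely classified correctly and instances with π > 0.5 almost surely misclassified, so the curve converges to the fraction of instances with π > 0.5; noisy estimates of the π values shift this limit upward, which is the bias the abstract discusses.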
Year | DOI | Venue |
---|---|---|
2007 | 10.1007/978-3-540-77226-2_6 | IDEAL |
Keywords | Field | DocType
---|---|---|
error curve,bootstrap estimation,finite number,bootstrap estimate,monte carlo theory,ensemble learning,proposed estimator,classification error,generalization error curve,true generalization error,generalization error,bagging ensemble,monte carlo | Monte Carlo method,Finite set,Monte Carlo algorithm,Parametrization,Bootstrapping (statistics),Artificial intelligence,Ensemble learning,Machine learning,Bootstrapping (electronics),Mathematics,Estimator | Conference
Volume | ISSN | ISBN
---|---|---|
4881 | 0302-9743 | 3-540-77225-1
Citations | PageRank | References
---|---|---|
3 | 0.38 | 9
Authors |
---|
3 |
Name | Order | Citations | PageRank |
---|---|---|---|
Daniel Hernández-Lobato | 1 | 440 | 26.10 |
Gonzalo Martínez-Muñoz | 2 | 524 | 23.76 |
Alberto Suárez | 3 | 137 | 6.28 |