Title
How large should ensembles of classifiers be?
Abstract
We propose to determine the size of a parallel ensemble by estimating the minimum number of classifiers that are required to obtain stable aggregate predictions. Assuming that majority voting is used, a statistical description of the convergence of the ensemble prediction to its asymptotic (infinite size) limit is given. The analysis of the voting process shows that for most test instances the ensemble prediction stabilizes after only a few classifiers are polled. By contrast, a small but non-negligible fraction of these instances require large numbers of classifier queries to reach stable predictions. Specifically, the fraction of instances whose stable predictions require more than T classifiers, for T >> 1, has a universal form and is proportional to T^(-1/2). The ensemble size is determined as the minimum number of classifiers that are needed to estimate the infinite ensemble prediction at an average confidence level α, close to one. This approach differs from previous proposals, which are based on determining the size for which the prediction error (not the predictions themselves) stabilizes. In particular, it does not require estimates of the generalization performance of the ensemble, which can be unreliable. It has general validity because it is based solely on the statistical description of the convergence of majority voting to its asymptotic limit. Extensive experiments using representative parallel ensembles (bagging and random forest) illustrate the application of the proposed framework in a wide range of classification problems. These experiments show that the optimal ensemble size is very sensitive to the particular classification problem considered.
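The core idea of the abstract can be illustrated with a small sketch. Suppose that, for a given test instance, each classifier in the ensemble independently votes for class 1 with probability p; the infinite-ensemble (asymptotic) prediction is class 1 whenever p > 0.5. The minimum ensemble size for that instance is then the smallest odd T for which the majority of T votes agrees with the asymptotic prediction with probability at least α. The function names and the exact-binomial formulation below are illustrative assumptions, not the paper's implementation:

```python
import math


def agreement_prob(T, p):
    """P(majority of T i.i.d. votes matches the asymptotic class),
    where each vote goes to that class with probability p > 0.5 and T is odd.
    Computed as the exact binomial upper tail P(K > T/2), K ~ Binomial(T, p)."""
    return sum(math.comb(T, k) * p**k * (1 - p) ** (T - k)
               for k in range(T // 2 + 1, T + 1))


def min_ensemble_size(p, alpha=0.99):
    """Smallest odd T such that the T-classifier majority vote agrees with
    the infinite-ensemble prediction with probability >= alpha.
    Assumes p != 0.5 (for p = 0.5 no finite T suffices)."""
    p_eff = p if p > 0.5 else 1 - p  # by symmetry, work with the winning class
    T = 1
    while agreement_prob(T, p_eff) < alpha:
        T += 2  # keep T odd so the majority is always well defined
    return T
```

Instances with p far from 0.5 stabilize after very few votes, while instances near the decision boundary (p close to 0.5) drive the required size up sharply; it is this heavy tail of hard instances that produces the T^(-1/2) behaviour discussed in the abstract.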
Year
2013
DOI
10.1016/j.patcog.2012.10.021
Venue
Pattern Recognition
Keywords
statistical description, ensemble size, representative parallel ensemble, parallel ensemble, minimum number, optimal ensemble size, ensemble prediction, majority voting, infinite ensemble prediction, stable prediction, ensemble learning, random forest, bagging
Field
Ensembles of classifiers, Ensemble forecasting, Pattern recognition, Random subspace method, Cascading classifiers, Artificial intelligence, Majority rule, Classifier (linguistics), Random forest, Ensemble learning, Machine learning, Mathematics
DocType
Journal
Volume
46
Issue
5
ISSN
0031-3203
Citations
18
PageRank
0.64
References
29
Authors
3
Name | Order | Citations | PageRank
Daniel Hernández-Lobato | 1 | 440 | 26.10
Gonzalo Martínez-Muñoz | 2 | 524 | 23.76
Alberto Suárez | 3 | 67 | 6.28