Title
Self-poised ensemble learning
Abstract
This paper proposes a new approach to training ensembles of learning machines in a regression context. At each iteration, a new learner is added to compensate for the error made by the previous learner in predicting its training patterns. The algorithm operates directly on the values to be predicted by the next machine, keeping the ensemble within the target hypothesis while ensuring diversity. We give a theoretical explanation that clarifies what the method does algorithmically and allows us to show its stochastic convergence. Finally, experimental results are presented comparing the performance of this algorithm with boosting and bagging on two well-known data sets.
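The core idea the abstract describes can be illustrated with a generic residual-compensation sketch: each new learner is trained on the error left behind by the learners before it, and the ensemble prediction is the sum of all learners. This is a minimal illustration of that general scheme, not the paper's specific self-poised target-modification rule (which is not reproduced here); the `Stump` weak learner and all function names are assumptions for the example.

```python
import numpy as np

class Stump:
    """A weak regression learner: a single-split decision stump
    (hypothetical choice for illustration only)."""
    def fit(self, X, y):
        best = None
        for j in range(X.shape[1]):
            for t in np.unique(X[:, j]):
                left = X[:, j] <= t
                if left.all() or not left.any():
                    continue  # skip degenerate splits
                pl, pr = y[left].mean(), y[~left].mean()
                err = ((y - np.where(left, pl, pr)) ** 2).sum()
                if best is None or err < best[0]:
                    best = (err, j, t, pl, pr)
        _, self.j, self.t, self.pl, self.pr = best
        return self

    def predict(self, X):
        return np.where(X[:, self.j] <= self.t, self.pl, self.pr)

def fit_residual_ensemble(X, y, make_learner, n_learners=10):
    """Sequentially add learners, each fit to the residual error
    of the ensemble built so far (generic scheme, not the paper's
    exact rule)."""
    learners, residual = [], y.astype(float).copy()
    for _ in range(n_learners):
        h = make_learner().fit(X, residual)
        residual = residual - h.predict(X)  # what remains to explain
        learners.append(h)
    return learners

def predict_ensemble(learners, X):
    # Ensemble output is the sum of the learners' corrections.
    return sum(h.predict(X) for h in learners)

# Demo on synthetic regression data.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 2))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1]
ens = fit_residual_ensemble(X, y, Stump, n_learners=20)
pred = predict_ensemble(ens, X)
```

Because each stump minimizes the squared error of the current residual, the training error is non-increasing as learners are added; the paper's contribution is choosing the targets of each new machine so that diversity is also maintained.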
Year
2005
DOI
10.1007/11552253_25
Venue
IDA
Keywords
theoretical explanation, regression context, target hypothesis, self-poised ensemble learning, stochastic convergence, training pattern, next machine, new approach, previous learner, new learner, ensemble learning
Field
Convergence (routing), Data set, Regression, Computer science, Boosting (machine learning), Generalization error, Artificial intelligence, Ensemble learning, Machine learning
DocType
Conference
Volume
3646
ISSN
0302-9743
ISBN
3-540-28795-7
Citations
1
PageRank
0.37
References
13
Authors
4

Name               Order  Citations  PageRank
Ricardo Ñanculef   1      53         10.64
Carlos Valle       2      21         8.20
Héctor Allende     3      148        31.69
Claudio Moraga     4      612        100.27