Title
Bagging with Adaptive Costs
Abstract
Ensemble methods have proved highly effective at improving the performance of base learners in most circumstances. In this paper, we propose a new algorithm that combines the merits of several existing techniques, namely bagging, arcing, and stacking. The basic structure of the algorithm resembles bagging, but the misclassification cost of each training point is repeatedly adjusted according to its observed out-of-bag vote margin. In this way, the method gains the advantage of arcing (building the classifier the ensemble needs) without fixating on potentially noisy points. Computational experiments show that this algorithm, termed bacing, performs consistently better than bagging and arcing with both linear and nonlinear base classifiers. In view of the characteristics of bacing, a hybrid ensemble learning strategy that combines bagging with different versions of bacing is proposed and studied empirically.
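The abstract describes the overall loop (bagging-style resampling with per-point misclassification costs updated from out-of-bag vote margins) but not the exact cost-update rule or how costs enter the base learner. The following minimal Python sketch is therefore illustrative only: the function names bacing and predict are ours, the margin-to-cost mapping is an assumption, and cost-proportional bootstrap sampling stands in for whatever cost-sensitive fitting the paper actually uses.

    import numpy as np
    from sklearn.base import clone
    from sklearn.tree import DecisionTreeClassifier

    def bacing(X, y, n_estimators=50, base=None, rng=None):
        """Sketch of cost-adaptive bagging ("bacing"): bagging in which each
        training point carries a misclassification cost updated from its
        out-of-bag vote margin, so low-margin points get more attention in
        later rounds. The update rule here is illustrative, not the paper's."""
        rng = np.random.default_rng(rng)
        base = base or DecisionTreeClassifier()
        n = len(X)
        costs = np.ones(n)          # per-point misclassification costs
        oob_correct = np.zeros(n)   # out-of-bag votes for the true class
        oob_total = np.zeros(n)     # out-of-bag votes received
        ensemble = []
        for _ in range(n_estimators):
            # Cost-proportional bootstrap sample (a stand-in for cost-sensitive fitting).
            p = costs / costs.sum()
            idx = rng.choice(n, size=n, replace=True, p=p)
            clf = clone(base).fit(X[idx], y[idx])
            ensemble.append(clf)
            # Update cumulative out-of-bag vote margins, then the costs.
            oob = np.setdiff1d(np.arange(n), idx)
            if oob.size:
                pred = clf.predict(X[oob])
                oob_total[oob] += 1
                oob_correct[oob] += (pred == y[oob])
                margin = np.where(oob_total > 0,
                                  2 * oob_correct / np.maximum(oob_total, 1) - 1,
                                  0.0)
                # Low (or negative) margin -> higher cost; clipped to stay positive.
                costs = np.clip(1.0 - margin, 0.1, 2.0)
        return ensemble

    def predict(ensemble, X):
        """Majority vote; assumes non-negative integer class labels."""
        votes = np.stack([clf.predict(X) for clf in ensemble])
        return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

Resampling in proportion to cost is just one common way to emulate cost-sensitive training when a base learner lacks sample-weight support; the paper's own mechanism for injecting costs, and its stacking component, may differ.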
Year
2008
DOI
10.1109/TKDE.2007.190724
Venue
IEEE Trans. Knowl. Data Eng.
Keywords
basic structure, hybrid ensemble, ensemble need, existing technique, adaptive costs, ensemble method, nonlinear base classifier, new algorithm, computational experiment, different version, base learner, data mining, boosting, bagging, ensemble learning, computer experiment, learning artificial intelligence, stacking, ensemble methods, voting
Field
Data mining, Nonlinear system, Computer science, Adaptive method, Information extraction, Boosting (machine learning), Artificial intelligence, Classifier (linguistics), Ensemble learning, Machine learning
DocType
Journal
Volume
20
Issue
5
ISSN
1041-4347
ISBN
0-7695-2278-5
Citations
2
PageRank
0.39
References
13
Authors
2
Name            Order  Citations  PageRank
Yi Zhang        1      214        10.52
W. Nick Street  2      1828       155.26