Title
Progressive Boosting for Classifier Committee Learning
Abstract
Most applications of artificial intelligence to tasks of practical importance are based on constructing a model of the knowledge used by a human expert. In a classification model, the connection between classes and properties can be defined by something as simple as a flowchart or as complex and unstructured as a procedures manual. Classifier committee learning methods generate multiple classifiers to form a committee by repeated application of a single base learning algorithm, and the committee members vote to decide the final classification. Bagging and boosting are two such methods for improving the predictive power of classifier learning systems. This paper studies a different approach: progressive boosting of decision trees. Instead of sampling the same number of data points at each boosting iteration t, our progressive boosting algorithm draws n(t) data points according to a sampling schedule. An empirical evaluation of a variant of this method shows that progressive boosting can significantly reduce the error rate of decision tree learning and is, on average, more accurate than bagging and boosting.
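The following is a minimal sketch of the idea described in the abstract: a boosting loop in which iteration t trains its decision tree on n(t) examples drawn under the current boosting distribution rather than on a fixed-size sample. The specifics here are assumptions, not details from the paper: an AdaBoost-style reweighting scheme, binary labels in {0, 1}, NumPy arrays for X and y, scikit-learn decision trees as the base learner, and an illustrative linear schedule that grows n(t) from a fraction of the training set up to its full size.

```python
# Sketch of boosting with a progressive sampling schedule n(t).
# Assumptions (not taken from the paper): AdaBoost-style reweighting,
# binary labels in {0, 1}, a linear schedule from n0_frac*N up to N,
# and scikit-learn decision trees as the base learner.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def progressive_boost(X, y, T=10, n0_frac=0.2, seed=0):
    """Train a committee of up to T trees, drawing n(t) examples at iteration t."""
    N = len(y)
    w = np.full(N, 1.0 / N)              # boosting distribution over training examples
    rng = np.random.default_rng(seed)
    trees, alphas = [], []

    for t in range(T):
        # Progressive schedule: n(t) grows linearly from n0_frac*N to N.
        n_t = int(N * (n0_frac + (1.0 - n0_frac) * t / max(T - 1, 1)))
        idx = rng.choice(N, size=n_t, replace=True, p=w)

        tree = DecisionTreeClassifier(max_depth=3).fit(X[idx], y[idx])
        pred = tree.predict(X)

        # Weighted training error and committee-member weight, as in AdaBoost.
        err = float(np.sum(w[pred != y]))
        if err == 0.0 or err >= 0.5:
            break
        alpha = 0.5 * np.log((1.0 - err) / err)

        # Increase the weight of misclassified examples, then renormalise.
        w = w * np.exp(alpha * np.where(pred != y, 1.0, -1.0))
        w = w / w.sum()

        trees.append(tree)
        alphas.append(alpha)

    def predict(X_new):
        # Weighted committee vote over the two classes.
        votes = np.zeros(len(X_new))
        for clf, alpha in zip(trees, alphas):
            votes += alpha * np.where(clf.predict(X_new) == 1, 1.0, -1.0)
        return (votes >= 0).astype(int)

    return predict
```

Under this reading, standard boosting corresponds to the constant schedule n(t) = N; a growing schedule lets early committee members be trained cheaply on small samples while later members see most of the data.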
Year
2004
DOI
10.1007/978-3-540-30176-9_7
Venue
Lecture Notes in Computer Science
Keywords
artificial intelligence, decision tree learning, decision tree, error rate
Field
Decision tree, Computer science, Boosting (machine learning), Artificial intelligence, LPBoost, Margin classifier, Alternating decision tree, Decision tree learning, BrownBoost, Machine learning, Gradient boosting
DocType
Conference
Volume
3285
ISSN
0302-9743
Citations
0
PageRank
0.34
References
3
Authors
5
Name | Order | Citations | PageRank
Md. Waselul Haque Sadid | 1 | 0 | 0.34
Nazrul Islam | 2 | 96 | 12.24
Md. Shamsul Alam | 3 | 82 | 5.90
Abu Sayeed Md. Sohail | 4 | 26 | 4.01
Boshir Ahmed | 5 | 0 | 1.35