Title: Turning majority voting classifiers into a single decision tree
Abstract: This paper addresses the intelligibility, classification speed, and storage requirements of majority voting classifiers. Methods that classify unknown cases by combining multiple classifiers (e.g., bagging and boosting) have been actively studied in recent years. Because these methods classify a case by majority voting over the classifiers, the reasons behind a decision cannot be expressed in a logical form. Moreover, a large number of classifiers is needed to improve accuracy significantly, which greatly increases the time and space required for classification. To solve these problems, this paper proposes a method for learning a single decision tree that approximates the majority voting classifiers. The proposed method generates if-then rules from each classifier and then learns a single decision tree from these rules. Experimental results show that the decision trees produced by our method are considerably more compact than, and comparable in accuracy to, those of bagging. Moreover, the proposed method is 8 to 24 times faster than bagging in classification.
Year: 1998
DOI: 10.1109/TAI.1998.744847
Venue: Taipei
Keywords: decision trees, learning by example, pattern classification, bagging, boosting, classification speed, decision tree, experimental results, if-then rules, intelligibility, learning, majority voting classifiers, multiple classifiers, unknown cases
Field: Decision tree, Pattern recognition, Voting, Computer science, Random subspace method, Logical form, Artificial intelligence, Boosting (machine learning), Majority rule, Classifier (linguistics), Machine learning, Knowledge acquisition
DocType: Conference
ISSN: 1082-3409
ISBN: 0-7803-5214-9
Citations: 4
PageRank: 0.66
References: 1
Authors: 3
Authors:
  Order  Name               Citations  PageRank
  1      Yasuhiro Akiba     143        24.43
  2      Shigeo Kaneda      69         26.85
  3      Hussein Almuallim  5471       38.58