Abstract |
---|
Ensembles are often capable of greater prediction accuracy than any of their individual members, and, owing to the diversity between its individual base-learners, an ensemble is not prone to overfitting. On the other hand, in many cases we are dealing with imbalanced data, and a classifier built using all of the data tends to ignore the minority class. As a solution to this problem, we propose to consider a large number of relatively small and balanced subsets, in which representatives of the larger (majority) class are selected randomly. As an outcome, the system produces a matrix of linear regression coefficients whose rows correspond to the random subsets and whose columns correspond to the features. Based on this matrix, we assess how stable the influence of each particular feature is, and we propose to keep in the model only those features whose influence is stable. The final model is an average of the base-learners, which are not necessarily linear regressions. Test results on the datasets of the PAKDD-2007 data-mining competition are presented. |
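The procedure outlined in the abstract (balanced random subsets, a coefficient matrix over subsets and features, a stability-based feature filter, and an averaged ensemble) can be illustrated roughly as follows. This is a minimal sketch under stated assumptions, not the authors' implementation: it assumes binary 0/1 labels, scikit-learn's `LinearRegression` for the coefficient matrix and `LogisticRegression` as the base-learner (the paper does not name one), and an |mean|/std ratio with a fixed threshold as the stability criterion, which the abstract does not specify. All function names and parameters are hypothetical.

```python
# Sketch of the balanced random-subsets ensemble described in the abstract.
# Assumptions (not from the paper): scikit-learn estimators, binary 0/1 labels,
# and |mean|/std of each coefficient column as the "stability of influence" score.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def balanced_subsets(X, y, n_subsets=100, rng=None):
    """Yield index arrays of balanced subsets: all minority cases plus an
    equally sized random sample from the majority class."""
    rng = np.random.default_rng(rng)
    minority = np.flatnonzero(y == 1)
    majority = np.flatnonzero(y == 0)
    for _ in range(n_subsets):
        sampled = rng.choice(majority, size=minority.size, replace=False)
        yield np.concatenate([minority, sampled])

def coefficient_matrix(X, y, n_subsets=100, rng=None):
    """Fit a linear regression on each balanced subset; rows correspond to
    subsets and columns to features, as described in the abstract."""
    coefs = []
    for idx in balanced_subsets(X, y, n_subsets, rng):
        coefs.append(LinearRegression().fit(X[idx], y[idx]).coef_)
    return np.vstack(coefs)

def stable_features(coefs, threshold=2.0):
    """Keep features whose influence is stable across subsets, measured here
    (an assumption) by |mean| / std of the corresponding coefficient column."""
    mean, std = coefs.mean(axis=0), coefs.std(axis=0) + 1e-12
    return np.flatnonzero(np.abs(mean) / std >= threshold)

def ensemble_predict(X_train, y_train, X_test, features, n_subsets=100, rng=None):
    """Average the predictions of base-learners fitted on balanced subsets,
    using only the selected (stable) features."""
    preds = np.zeros(len(X_test))
    for idx in balanced_subsets(X_train, y_train, n_subsets, rng):
        base = LogisticRegression(max_iter=1000)
        base.fit(X_train[np.ix_(idx, features)], y_train[idx])
        preds += base.predict_proba(X_test[:, features])[:, 1]
    return preds / n_subsets

if __name__ == "__main__":
    # Synthetic imbalanced data for illustration only.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 20))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=2.0, size=2000) > 2.5).astype(int)
    coefs = coefficient_matrix(X, y, n_subsets=50, rng=1)
    keep = stable_features(coefs, threshold=2.0)
    scores = ensemble_predict(X, y, X, keep, n_subsets=50, rng=2)
    print("kept features:", keep, "mean score:", scores.mean())
```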
Year | DOI | Venue |
---|---|---|
2009 | 10.1007/978-3-642-10439-8_30 | Australasian Conference on Artificial Intelligence |
Keywords | Field | DocType
---|---|---
final model, individual member, stable influence, imbalanced data, pakdd-2007 data-mining competition, linear regression, balanced subsets, ensemble approach, linear regression coefficient, random subsets, individual base-learners, random forest, data mining, decision trees, boosting | Row, Decision tree, Matrix (mathematics), Computer science, Boosting (machine learning), Artificial intelligence, Overfitting, Random forest, Classifier (linguistics), Machine learning, Linear regression | Conference
Volume | ISSN | Citations
---|---|---
5866 | 0302-9743 | 9
PageRank | References | Authors
---|---|---
0.56 | 11 | 3
Name | Order | Citations | PageRank |
---|---|---|---|
Vladimir Nikulin | 1 | 99 | 17.28 |
Geoffrey J. McLachlan | 2 | 1787 | 126.70
Shu Kay Ng | 3 | 161 | 13.17 |