Abstract |
---|
Without guidance on unseen data, learning models that approximate the known data equally well can still produce very different outputs on the unseen data. Such differences show up as large variances in learning, and large variances can lead to overfitting on noisy data. This paper proposes one form of guidance: setting a middle target value on the unknown data in balanced ensemble learning. Although balanced ensemble learning can learn faster and better than negative correlation learning, it also carries a higher risk of overfitting when the number of training data points is limited. Experimental results show how such random learning regulates the variances in balanced ensemble learning. |
Year | DOI | Venue |
---|---|---|
2012 | 10.1007/978-3-642-34289-9_42 | Communications in Computer and Information Science |
Field | DocType | Volume |
---|---|---|
Training set, Negative correlation, Noisy data, Computer science, Artificial intelligence, Learning models, Overfitting, Ensemble learning, Machine learning | Conference | 316 |
ISSN | Citations | PageRank |
---|---|---|
1865-0929 | 0 | 0.34 |
References | Authors |
---|---|
4 | 1 |