Title
Bounded Learning For Neural Network Ensembles
Abstract
Two error bounds are introduced into the learning process of balanced ensemble learning: the lower bound of the error rate (LBER) and the upper bound of the error output (UBEO), both measured on the training set. These two bounds decide whether a training data point should be learned further once balanced ensemble learning has reached a certain stage. While the error rate is still above LBER, the whole training set is fed to balanced ensemble learning. Once the error rate falls below LBER, only the data points near the learned decision boundary are learned, rather than the whole training set. Data points farther from the decision boundary may either be learned well already or not be learned at all. To cope with the not-yet-learned data far from the learned decision boundary, balanced ensemble learning would have to make such large changes to the decision boundary that the ensemble could grow too complex for the application. These not-yet-learned data should therefore be excluded from the training set. Removing the well-learned data points far from the decision boundary has little impact on the learned decision boundary. Experimental results show how LBER and UBEO let balanced ensemble learning avoid overfitting.
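The abstract implies a simple per-epoch data-selection rule. The sketch below is one plausible reading of that rule, not the authors' published algorithm: it assumes the ensemble's per-point error outputs and the current training error rate are available, and the names select_training_points, errors, error_rate, lber, and ubeo are illustrative only.

```python
import numpy as np

def select_training_points(errors, error_rate, lber, ubeo):
    """Hypothetical LBER/UBEO selection rule for balanced ensemble learning.

    errors     : per-point error outputs of the current ensemble, e.g.
                 |target - ensemble output| for each training point, shape (n,)
    error_rate : current classification error rate on the training set
    lber       : lower bound of error rate (LBER)
    ubeo       : upper bound of error output (UBEO)

    Returns a boolean mask over the training set: True = keep the point.
    """
    if error_rate > lber:
        # Early stage: the error rate is still above LBER, so the whole
        # training set is fed to balanced ensemble learning.
        return np.ones_like(errors, dtype=bool)
    # Later stage: exclude points whose error output exceeds UBEO; they lie
    # far from the learned decision boundary, and fitting them would force
    # boundary changes large enough to make the ensemble overly complex.
    return errors <= ubeo
```

Under this reading, well-learned points far from the boundary have near-zero error outputs and survive the mask; the abstract notes that such points could equally be dropped with little impact on the learned boundary.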
Year
2015
DOI
10.1109/ICInfA.2015.7279472
Venue
2015 IEEE International Conference on Information and Automation
Keywords
neural networks, training data, diabetes
Field
Data point, Computer science, Upper and lower bounds, Word error rate, Artificial intelligence, Overfitting, Artificial neural network, Ensemble learning, Decision boundary, Machine learning, Bounded function
DocType
Conference
Citations
1
PageRank
0.41
References
8
Authors
3
Name          Order  Citations  PageRank
Yong Liu      1      2526       265.08
Qiangfu Zhao  2      214        62.36
Yan Pei       3      125        22.89