Title
Reduction Of Difference Among Trained Neural Networks By Re-Learning
Abstract
Trained neural networks often end up with different decision boundaries because of variations in training data, learning algorithms, architectures, and initial random weights. Such variations are helpful for designing neural network ensembles, but harmful in that they cause unstable performance, i.e., large variance across different training runs. This paper discusses how to reduce this variance among trained neural networks by letting them re-learn on the data points on which they disagree with each other. Experiments on four real-world applications illustrate how and when such re-learning works.
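The abstract's idea can be sketched in code. The following is a hypothetical toy illustration, not the paper's actual method: simple perceptrons stand in for neural networks, each trained on a different subsample of the data with different initial random weights, so their decision boundaries differ. The points on which the trained models disagree are then collected, and each model continues training ("re-learns") on those disputed points. All function names and parameters here are assumptions for illustration.

```python
import random

def train_perceptron(points, epochs, lr=0.1, seed=0, init=None):
    """Train a linear threshold unit; points are (features, label in {-1, +1})."""
    rng = random.Random(seed)
    dim = len(points[0][0])
    if init is None:
        # Random initial weights -- one source of boundary variation.
        w = [rng.uniform(-0.5, 0.5) for _ in range(dim)]
        b = rng.uniform(-0.5, 0.5)
    else:
        w, b = list(init[0]), init[1]
    for _ in range(epochs):
        for x, y in points:
            # Standard perceptron update on misclassified points.
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(model, x):
    w, b = model
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

def disagreement_points(models, points):
    """Labeled points on which the trained models give conflicting outputs."""
    return [(x, y) for x, y in points
            if len({predict(m, x) for m in models}) > 1]

def relearn(models, disputed, epochs=500, lr=0.1):
    """Re-learning step: continue training each model on disputed points only."""
    if not disputed:
        return models
    return [train_perceptron(disputed, epochs, lr=lr, init=m) for m in models]

# Linearly separable toy data: label is the sign of (x0 - x1).
data = [((x0, x1), 1 if x0 > x1 else -1)
        for x0 in range(4) for x1 in range(4) if x0 != x1]

# Variation in training subsets + initial weights -> different boundaries.
samplers = [random.Random(s) for s in (1, 2, 3)]
models = [train_perceptron(r.sample(data, 5), epochs=1, seed=s)
          for s, r in zip((1, 2, 3), samplers)]

disputed = disagreement_points(models, data)
relearned = relearn(models, disputed)
```

After re-learning, the models agree on the previously disputed points, which is the variance reduction the paper targets; in a realistic setting the re-learning would of course be applied to the trained networks themselves rather than to fresh linear units.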
Year: 2008
DOI: 10.1109/IJCNN.2008.4634054
Venue: 2008 IEEE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOLS 1-8
Keywords: decision boundary, artificial neural networks, heart, neural networks, diabetes, cancer, neural nets, stability, training data, neural network, testing
Field: Data point, Training set, Computer science, Artificial intelligence, Artificial neural network, Decision boundary, Machine learning
DocType: Conference
ISSN: 2161-4393
Citations: 0
PageRank: 0.34
References: 4
Authors: 1
Name: Yong Liu
Order: 1
Citations, PageRank: 2526265.08