Title
Neural Networks with Marginalized Corrupted Hidden Layer
Abstract
Overfitting is an important problem in neural network (NN) training. When the number of samples in the training set is limited, explicitly extending the training set with artificially generated samples is an effective solution; however, this approach incurs high computational costs. In this paper we propose a new learning scheme that trains single-hidden-layer feedforward neural networks (SLFNs) on an implicitly extended training set. The training set is extended by corrupting the hidden-layer outputs of the training samples with noise drawn from an exponential-family distribution. As the number of corrupted copies approaches infinity, the explicitly generated samples in the objective function can be expressed as an expectation. Our method, called marginalized corrupted hidden layer (MCHL), trains SLFNs by minimizing the expected value of the loss function under the corrupting distribution. In this way, MCHL is effectively trained on infinitely many samples. Experimental results on multiple data sets show that MCHL can be trained efficiently and generalizes better to test data.
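The following is a minimal sketch (not the authors' code) of the marginalization idea described in the abstract, assuming a squared loss, a linear output layer, a sigmoid hidden layer, and unbiased dropout ("blankout") corruption of the hidden activations; the function names and the toy data are illustrative only. For this setting the expected loss under the corrupting distribution has a closed form, so no corrupted copies need to be generated explicitly.

# Minimal sketch of marginalized hidden-layer corruption (assumed setting:
# squared loss, linear output layer, unbiased dropout noise on hidden units).
import numpy as np

def hidden_layer(X, W1, b1):
    """Hidden activations of a single-hidden-layer network (sigmoid assumed)."""
    return 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))

def marginalized_corruption_loss(H, Y, W2, q):
    """Expected squared loss when each hidden unit is dropped with probability q
    and rescaled by 1/(1-q), marginalized in closed form instead of sampling:

        E||Y - H~ W2||^2 = ||Y - H W2||^2 + sum_{i,j} Var(h~_ij) * sum_k W2[j,k]^2,
        Var(h~_ij) = q/(1-q) * H[i,j]^2  for unbiased dropout noise.
    """
    data_term = np.sum((Y - H @ W2) ** 2)
    var_h = (q / (1.0 - q)) * H ** 2          # per-unit corruption variance
    reg_term = np.sum(var_h @ (W2 ** 2))      # variance penalty from marginalization
    return data_term + reg_term

# Toy usage: the closed-form loss matches the average loss over many explicit
# corruptions as the number of corrupted copies grows.
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 10))
Y = rng.normal(size=(32, 3))
W1, b1 = rng.normal(size=(10, 20)), np.zeros(20)
W2 = rng.normal(size=(20, 3))
H = hidden_layer(X, W1, b1)
q = 0.3

mc = np.mean([np.sum((Y - (H * (rng.random(H.shape) > q) / (1 - q)) @ W2) ** 2)
              for _ in range(5000)])
print(marginalized_corruption_loss(H, Y, W2, q), mc)  # the two values should be close

The same marginalization can be carried out for other exponential-family corruptions and losses; only the mean and variance of the corrupted hidden units change in the formula above.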
Year
2015
DOI
10.1007/978-3-319-26555-1_57
Venue
Lecture Notes in Computer Science
Keywords
Neural network, Overfitting, Classification
Field
Feedforward neural network, Pattern recognition, Computer science, Exponential family, Infinity, Expected value, Artificial intelligence, Test data, Overfitting, Train, Artificial neural network
DocType
Conference
Volume
9491
ISSN
0302-9743
Citations
1
PageRank
0.36
References
7
Authors
3
Name       Order  Citations  PageRank
Yanjun Li  1      2          1.40
Xin Xin    2      58         7.73
Ping Guo   3      601        85.05