Title
Solving the Ill-Conditioning in Neural Network Learning
Abstract
In this paper we investigate the feed-forward learning problem. The well-known ill-conditioning present in most feed-forward learning problems is shown to result from the structure of the network. We also address the well-known problem that weights between 'higher' layers in the network must settle before 'lower' weights can converge. We solve these problems by modifying the structure of the network through the addition of linear connections which carry shared weights. We call the new network structure the linearly augmented feed-forward network, and we show that the universal approximation theorems remain valid. Simulation experiments confirm the validity of the new method and demonstrate that the new network is less sensitive to local minima and learns faster than the original network.
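The abstract describes augmenting a feed-forward layer with linear shortcut connections that carry shared weights. The paper's exact formulation is not given here, so the following is a minimal illustrative sketch under the assumption that each layer adds a linear bypass of its input, scaled by a single shared weight `a`:

```python
import numpy as np

def tanh_layer(x, W, b):
    # Standard feed-forward layer: nonlinearity applied to an affine map.
    return np.tanh(W @ x + b)

def augmented_layer(x, W, b, a):
    # Linearly augmented layer (illustrative assumption): the usual
    # nonlinear path plus a linear shortcut whose single weight `a`
    # is shared across the layer, as the abstract suggests.
    return np.tanh(W @ x + b) + a * x

rng = np.random.default_rng(0)
x = rng.standard_normal(3)
W = rng.standard_normal((3, 3))
b = np.zeros(3)

plain = tanh_layer(x, W, b)
augmented = augmented_layer(x, W, b, a=0.5)
print(plain.shape, augmented.shape)
```

Note that the shortcut as written requires the layer's input and output dimensions to match; the paper's actual construction may differ.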
Year
1996
DOI
10.1007/3-540-49430-8_10
Venue
Neural Networks: Tricks of the Trade (2nd ed.)
Keywords
feed forward, neural network, local minima, simulation experiment
DocType
Conference
Volume
1524
ISSN
0302-9743
ISBN
3-540-65311-2
Citations
8
PageRank
1.89
References
10
Authors
2
Name | Order | Citations | PageRank
P. Patrick Van Der Smagt | 1 | 2743 | 5.19
Gerd Hirzinger | 2 | 51856 | 17.40