Title
Convergence of gradient method with penalty for Ridge Polynomial neural network.
Abstract
In this paper, a penalty term is added to the conventional error function to improve the generalization of the Ridge Polynomial neural network. In order to choose appropriate learning parameters, we propose a monotonicity theorem and two convergence theorems, a weak convergence theorem and a strong convergence theorem, for the synchronous gradient method with penalty for the neural network. The experimental results on a function approximation problem illustrate that the above theoretical results are valid.
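The abstract describes a gradient method whose error function is augmented with a penalty term. A minimal sketch of such a penalized (L2) gradient update on a simple polynomial model is given below; the model, the penalty weight `lam`, the learning rate `eta`, and the function-approximation target are illustrative assumptions, not the paper's actual Ridge Polynomial network or experiment.

```python
import numpy as np

def penalized_gradient_descent(X, y, degree=3, eta=0.1, lam=0.001, epochs=5000):
    """Gradient descent on E(w) = MSE + lam * ||w||^2 (illustrative sketch;
    a plain polynomial model stands in for the Ridge Polynomial network)."""
    # Polynomial feature expansion: [1, x, x^2, ..., x^degree]
    Phi = np.vander(X, degree + 1, increasing=True)
    w = np.zeros(degree + 1)
    for _ in range(epochs):
        err = Phi @ w - y                       # residuals
        grad = Phi.T @ err / len(y) + lam * w   # gradient of MSE plus penalty term
        w -= eta * grad                         # synchronous weight update
    return w

# Usage: approximate y = sin(x) on [0, 1]
X = np.linspace(0.0, 1.0, 50)
y = np.sin(X)
w = penalized_gradient_descent(X, y)
```

The penalty term `lam * ||w||^2` shrinks the weights toward zero, which is the standard mechanism for improving generalization that the abstract refers to.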
Year: 2012
DOI: 10.1016/j.neucom.2012.05.022
Venue: Neurocomputing
Keywords: Ridge Polynomial neural network, Gradient algorithm, Monotonicity, Convergence
Field: Convergence (routing), Gradient method, Weak convergence, Mathematical optimization, Normal convergence, Compact convergence, Convergence tests, Artificial intelligence, Artificial neural network, Machine learning, Mathematics, Modes of convergence
DocType: Journal
Volume: 97
Issue: null
ISSN: 0925-2312
Citations: 2
PageRank: 0.37
References: 6
Authors: 2
Name | Order | Citations | PageRank
Xin Yu | 1 | 8 | 3.18
Qingfeng Chen | 2 | 25 | 6.74