Title: A New Formulation for Feedforward Neural Networks
Abstract:
Feedforward neural networks are among the most commonly used function approximation techniques and have been applied to a wide variety of problems across disciplines. However, neural networks are black-box models with multiple challenges associated with training and generalization. This paper first examines the internal behavior of neural networks and develops a detailed interpretation of neural network functional geometry. Based on this geometrical interpretation, a new set of variables describing neural networks is proposed as a more effective and geometrically interpretable alternative to the traditional set of network weights and biases. The paper then develops a new formulation for neural networks in terms of the newly defined variables; this reformulated neural network (ReNN) is equivalent to the common feedforward neural network but has a less complex error response surface. To demonstrate the learning ability of ReNN, two training methods are employed: a derivative-based method (a variation of backpropagation) and a derivative-free optimization algorithm. Moreover, a new measure of regularization based on the developed geometrical interpretation is proposed to evaluate and improve the generalization ability of neural networks. The value of the proposed geometrical interpretation, the ReNN approach, and the new regularization measure is demonstrated across multiple test problems. Results show that ReNN can be trained more effectively and efficiently than common neural networks, and that the proposed regularization measure is an effective indicator of how a network will perform in terms of generalization.
Year: 2011
DOI: 10.1109/TNN.2011.2163169
Venue: IEEE Transactions on Neural Networks
Keywords: feedforward neural nets, function approximation, generalisation (artificial intelligence), learning (artificial intelligence), optimisation, ReNN approach, black box model, derivative free optimization algorithm, error response surface, feedforward neural network, function approximation techniques, generalization ability, geometrical interpretation, learning ability, neural network functional geometry, reformulated neural network, training method, generalization, internal behavior, measure of regularization, training
Field: Feedforward neural network, Computer science, Stochastic neural network, Recurrent neural network, Probabilistic neural network, Types of artificial neural networks, Time delay neural network, Artificial intelligence, Deep learning, Machine learning, Catastrophic interference
DocType: Journal
Volume: 22
Issue: 10
ISSN: 1045-9227
Citations: 26
PageRank: 1.61
References: 26
Authors: 2
Name | Order | Citations | PageRank
Saman Razavi | 1 | 26 | 1.61
Bryan A. Tolson | 2 | 81 | 6.66