Abstract |
---|
Multilayer perceptron networks whose outputs are affine combinations of hidden units with the tanh activation function are universal function approximators and are widely used for regression, typically by minimizing the mean squared error (MSE) with backpropagation. We present a neural network weight learning algorithm that directly positions the hidden units within input space by numerically analyzing the curvature of the output surface. Our results show that, under certain sampling requirements, this method can reliably recover the parameters of the neural network used to generate a data set. |
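The network family the abstract describes — an affine combination of tanh hidden units, fit by minimizing MSE with backpropagation — can be sketched as follows. This is a generic baseline illustration, not the paper's curvature-based algorithm; the layer sizes, learning rate, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Teacher" network of the form described in the abstract:
# output = a . tanh(W x + b) + c  (affine combination of tanh units).
# Sizes here (3 hidden units, 1-D input) are assumptions for illustration.
W_true = rng.normal(size=(3, 1))   # hidden-unit input weights
b_true = rng.normal(size=3)        # hidden-unit biases
a_true = rng.normal(size=3)        # output combination weights
c_true = 0.5                       # output bias

X = rng.uniform(-2, 2, size=(200, 1))
y = np.tanh(X @ W_true.T + b_true) @ a_true + c_true

# "Student" network trained by minimizing MSE with plain gradient
# descent (backpropagation) -- the conventional approach the paper
# contrasts with its direct hidden-unit positioning method.
W = rng.normal(scale=0.5, size=(3, 1))
b = np.zeros(3)
a = rng.normal(scale=0.5, size=3)
c = 0.0
lr = 0.02

mse0 = np.mean((np.tanh(X @ W.T + b) @ a + c - y) ** 2)  # loss before training

for _ in range(2000):
    h = np.tanh(X @ W.T + b)              # hidden activations, shape (200, 3)
    pred = h @ a + c                      # affine output
    err = pred - y                        # residuals, shape (200,)
    # Gradients of the loss 0.5 * mean(err**2)
    grad_a = h.T @ err / len(X)
    grad_c = err.mean()
    dh = np.outer(err, a) * (1 - h**2)    # backprop through tanh: tanh' = 1 - tanh^2
    grad_W = dh.T @ X / len(X)
    grad_b = dh.mean(axis=0)
    a -= lr * grad_a
    c -= lr * grad_c
    W -= lr * grad_W
    b -= lr * grad_b

mse = np.mean((np.tanh(X @ W.T + b) @ a + c - y) ** 2)
print(f"MSE before: {mse0:.4f}, after: {mse:.4f}")
```

Because the student has the same architecture as the teacher, gradient descent can in principle drive the MSE toward zero, though (as the paper's premise suggests) backpropagation alone offers no guarantee of recovering the teacher's exact parameters.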
Year | DOI | Venue |
---|---|---|
2011 | 10.1016/j.neunet.2011.01.006 | Neural Networks |
Keywords | Field | DocType
---|---|---|
function approximation, supervised learning, regression, backpropagation, neural network, activation function, multilayer perceptron | Feedforward neural network, Activation function, Algorithm, Probabilistic neural network, Time delay neural network, Multilayer perceptron, Artificial intelligence, Echo state network, Backpropagation, Artificial neural network, Machine learning, Mathematics | Journal

Volume | Issue | ISSN
---|---|---|
24 | 5 | 0893-6080

Citations | PageRank | References
---|---|---|
4 | 0.46 | 11
Authors |
---|
4 |
Name | Order | Citations | PageRank |
---|---|---|---|
Rupert C. J. Minnett | 1 | 4 | 0.46 |
Andrew T. Smith | 2 | 60 | 9.22 |
William C. Lennon Jr. | 3 | 4 | 0.46 |
Robert Hecht-Nielsen | 4 | 448 | 110.50 |