Title: Fast learning in networks of locally-tuned processing units
Abstract
We propose a network architecture which uses a single internal layer of locally-tuned processing units to learn both classification tasks and real-valued function approximations (Moody and Darken 1988). We consider training such networks in a completely supervised manner, but abandon this approach in favor of a more computationally efficient hybrid learning method which combines self-organized and supervised learning. Our networks learn faster than backpropagation for two reasons: the local representations ensure that only a few units respond to any given input, thus reducing computational overhead, and the hybrid learning rules are linear rather than nonlinear, thus leading to faster convergence. Unlike many existing methods for data analysis, our network architecture and learning rules are truly adaptive and are thus appropriate for real-time use.
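The abstract's hybrid scheme (self-organized placement of the locally-tuned units, then a linear supervised fit of the output weights) can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's implementation: it uses Gaussian units, a plain k-means loop to place the centers, a nearest-center heuristic for the unit widths, and a batch least-squares solve for the output layer, all on a made-up 1-D regression task (approximating sin x).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression task: approximate sin(x) on [0, 2*pi].
X = rng.uniform(0.0, 2 * np.pi, size=(200, 1))
y = np.sin(X[:, 0])

# Stage 1 (self-organized): place the k unit centers with a simple k-means.
k = 10
centers = X[rng.choice(len(X), size=k, replace=False)].copy()
for _ in range(50):
    # Assign each sample to its nearest center.
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    # Move each center to the mean of its assigned samples.
    for j in range(k):
        if np.any(labels == j):
            centers[j] = X[labels == j].mean(axis=0)

# Unit widths: distance to the nearest other center (a common heuristic,
# assumed here for illustration).
cd = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
np.fill_diagonal(cd, np.inf)
widths = cd.min(axis=1)

def unit_responses(X):
    """Responses of the locally-tuned (Gaussian) units: only units whose
    centers lie near the input respond appreciably."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-(d ** 2) / (2 * widths ** 2))

# Stage 2 (supervised, linear): fit the output weights by least squares,
# a linear problem, rather than by nonlinear gradient descent.
Phi = unit_responses(X)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

pred = Phi @ w
rmse = np.sqrt(np.mean((pred - y) ** 2))
```

Because the only supervised step is linear, training reduces to one clustering pass plus one least-squares solve, which is the source of the speed advantage over backpropagation claimed in the abstract.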
Year: 1989
DOI: 10.1162/neco.1989.1.2.281
Venue: Neural Computation
Keywords: data analysis, network architecture, self organization, backpropagation, supervised learning, real time
Field: Online machine learning, Competitive learning, Instance-based learning, Stability (learning theory), Semi-supervised learning, Computer science, Supervised learning, Unsupervised learning, Artificial intelligence, Feature learning, Machine learning
DocType: Journal
Volume: 1
Issue: 2
ISSN: 0899-7667
Citations: 1460
PageRank: 638.00
References: 3
Authors: 2
Name                 Order  Citations  PageRank
John E. Moody        1      2228       1880.81
Christian J. Darken  2      1466       641.23