Abstract |
---|
Many real-world pattern recognition problems are complicated by noise and imperfect data. It is widely accepted that recognition of such data would be greatly aided by making the classification context sensitive. One possible solution would require a multi-layered pattern-classifying network with feedback mechanisms. Recognition of this possibility has led to the development of CLAM (contextual layered associative memory), an attempt to extend conventional one-layered models to permit layering. Layering is made possible by an information-preserving probabilistic approach that simulates the effect of uncertainty within the network. The new network is designed to be robust under noise while retaining enough flexibility to learn new patterns; this is achieved by combining a novel new-node generation algorithm with a simple resonance mechanism. The network has a fixed number of layers, which classify the accumulated classifications from previous layers. The number of nodes is flexible, and a complete classification network can be grown from just a few seed nodes during training. Connectivity is also flexible: connections are generated and maintained according to the demands of the training data. This network is not meant to be a complete solution to the problem of context-sensitive classification, but a step towards making such networks possible. Its use is demonstrated in the recognition of planar objects given edge vectors. |
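The abstract describes a memory that grows new nodes from a few seeds when the training data demands it, with a resonance mechanism deciding whether an existing node already accounts for the input. The following is a minimal, hypothetical sketch of that general idea (a prototype memory with a resonance threshold); the class, its parameters, and the update rule are illustrative assumptions, not the CLAM algorithm from the paper.

```python
import numpy as np

class GrowingMemory:
    """Toy prototype memory that grows a new node whenever no existing
    node 'resonates' (matches closely enough) with the input.
    Illustrative sketch only -- not the CLAM algorithm itself."""

    def __init__(self, threshold=0.8):
        self.threshold = threshold  # cosine similarity needed to resonate
        self.nodes = []             # list of (prototype, label) pairs

    def train(self, x, label):
        x = np.asarray(x, dtype=float)
        idx, sim = self._best_match(x)
        if idx is None or sim < self.threshold:
            # No node resonates: generate a new node for this pattern.
            self.nodes.append((x.copy(), label))
        else:
            # Resonance: nudge the winning prototype toward the input.
            proto, _ = self.nodes[idx]
            self.nodes[idx] = (0.9 * proto + 0.1 * x, label)

    def classify(self, x):
        idx, _ = self._best_match(np.asarray(x, dtype=float))
        return self.nodes[idx][1] if idx is not None else None

    def _best_match(self, x):
        if not self.nodes:
            return None, 0.0
        sims = [float(x @ p) /
                (np.linalg.norm(x) * np.linalg.norm(p) + 1e-12)
                for p, _ in self.nodes]
        best = int(np.argmax(sims))
        return best, sims[best]
```

Training on two dissimilar patterns grows two nodes; a noisy version of either pattern then resonates with the nearer prototype, which loosely mirrors the abstract's claim of robustness under noise combined with the ability to learn new patterns.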
Year | DOI | Venue |
---|---|---|
1990 | 10.1016/0893-6080(90)90072-S | Neural Networks |
Keywords | Field | DocType
---|---|---|
flexible architectures, pattern classification, learning, neural nets, layered network, context sensitive pattern classification, information preservation | Training set, Data mining, Imperfect, Content-addressable memory, Computer science, Layering, Network simulation, Artificial intelligence, Probabilistic logic, Artificial neural network, Machine learning | Journal
Volume | Issue | ISSN
---|---|---|
3 | 3 | 0893-6080
Citations | PageRank | References
---|---|---|
8 | 5.34 | 2
Authors |
---|
2 |
Name | Order | Citations | PageRank |
---|---|---|---|
Neil A. Thacker | 1 | 517 | 72.16 |
John E. W. Mayhew | 2 | 233 | 322.10 |