Title: Robustness in neural computation: random graphs and sparsity
Abstract: An attempt is made to mathematically codify the belief that fully interconnected neural networks continue to function efficiently in the presence of component damage. Component damage is introduced in a fully interconnected neural network model of n neurons by randomly deleting the links between neurons. An analysis of the outer-product algorithm for this random graph model of sparse interconnectivity yields the following result: if the probability of losing any given link between two neurons is 1 − p, then the outer-product algorithm can store on the order of pn/log(pn²) stable memories while correcting a linear number of random errors. In particular, the average degree of the interconnectivity graph dictates the memory storage capability, and functional storage of memories as stable states becomes feasible abruptly when the average number of neural interconnections retained by a neuron exceeds the order of log n links (of a total of n possible links) with other neurons.
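The setup the abstract describes — outer-product (Hebbian) storage on a network whose links are independently deleted with probability 1 − p, followed by sign-threshold recall — can be sketched as follows. This is an illustrative toy, not code from the paper; the parameters n, p, m, and the corruption rate are arbitrary choices, and m is kept far below the pn/log(pn²) capacity scale so that recall succeeds.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 200  # neurons
p = 0.5  # probability a given link is retained (lost with probability 1 - p)
m = 5    # number of stored memories, well below capacity

# Random +/-1 memories to store.
memories = rng.choice([-1, 1], size=(m, n))

# Outer-product (Hebbian) rule: W_ij = sum_k x_k[i] * x_k[j], zero diagonal.
W = memories.T @ memories
np.fill_diagonal(W, 0)

# Component damage: delete each link independently with probability 1 - p.
# The mask is symmetrized so the surviving interconnectivity graph is undirected.
mask = np.triu(rng.random((n, n)) < p, 1)
mask = mask | mask.T
W = W * mask

def recall(state, steps=10):
    """Synchronous sign-threshold dynamics on the diluted network."""
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

# Probe with a corrupted copy of the first memory (flip ~5% of the bits).
probe = memories[0].copy()
flips = rng.choice(n, size=n // 20, replace=False)
probe[flips] *= -1
recovered = recall(probe)
print(np.mean(recovered == memories[0]))  # fraction of bits matching the stored memory
```

With the average degree pn = 100 well above the log n threshold the abstract identifies, the corrupted probe settles back onto the stored memory despite half of the links being deleted.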
Year: 1992
DOI: 10.1109/18.135650
Venue: IEEE Transactions on Information Theory
Keywords: content-addressable storage, graph theory, neural nets, associative memory, component damage, fully interconnected neural networks, functional storage, memory storage capability, neural computation, neuron, outer-product algorithm, random graph model, robustness, sparse interconnectivity, sparsity
Field: Graph theory, Binary logarithm, Random graph, Interconnectivity, Computer science, Algorithm, Models of neural computation, Robustness (computer science), Content-addressable storage, Artificial neural network
DocType: Journal
Volume: 38
Issue: 3
ISSN: 0018-9448
Citations: 10
PageRank: 3.22
References: 2
Authors: 1
Name: Santosh S. Venkatesh
Order: 1
Citations/PageRank: 38171.80