Title
Learning, Storing, and Disentangling Correlated Patterns in Neural Networks.
Abstract
The brain encodes object relationships using correlated neural representations. Previous studies have shown that correlated memory patterns are difficult for neural networks to process, and strategies based on modified unsupervised Hebbian rules have therefore been proposed. Here, we explore a supervised strategy for learning correlated patterns in a recurrent neural network. We require that the network not only learn to reconstruct a memory pattern, but also hold the pattern as an attractor long after the input cue is removed. Training the network with backpropagation through time, we show that it can store correlated patterns and, moreover, that when continuously morphed patterns are presented, it acquires the structure of a continuous attractor neural network. By introducing spike frequency adaptation into the neural dynamics after training, we further demonstrate that the network can anticipatively track moving inputs and disentangle superposed patterns. We hope this study offers insight into how neural systems process correlated representations of objects.
Year
2018
Venue
ICONIP
Field
Attractor, Backpropagation through time, Object relationship, Pattern recognition, Computer science, Recurrent neural network, Neural system, Spike frequency adaptation, Artificial intelligence, Artificial neural network
DocType
Conference
Citations
0
PageRank
0.34
References
4
Authors
7
Name           Order  Citations  PageRank
Xiaolong Zou   1      2          3.46
Zilong Ji      2      0          1.01
Xiao Liu       3      0          1.35
Tiejun Huang   4      4          9.05
Yuanyuan Mi    5      11         5.04
Da-Hui Wang    6      1          1.69
Si Wu          7      49         13.39