Title
Regularity Normalization: Neuroscience-Inspired Unsupervised Attention across Neural Network Layers
Abstract
Inspired by the adaptation phenomenon of neuronal firing, we propose regularity normalization (RN) as an unsupervised attention mechanism (UAM) that computes the statistical regularity in the implicit space of neural networks under the Minimum Description Length (MDL) principle. Treating the neural network optimization process as a partially observable model selection problem, regularity normalization constrains the implicit space by a normalization factor, the universal code length. We compute this universal code incrementally across neural network layers and demonstrate the flexibility to include data priors such as top-down attention and other oracle information. Empirically, our approach outperforms existing normalization methods in tackling limited, imbalanced, and non-stationary input distributions in image classification, classic control, procedurally-generated reinforcement learning, generative modeling, handwriting generation, and question answering tasks with various neural network architectures. Lastly, the unsupervised attention mechanism is a useful probing tool for neural networks, tracking the dependency and critical learning stages across layers and recurrent time steps of deep networks.
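To make the idea in the abstract concrete, the sketch below shows one way a regularity-normalization-style layer could look in PyTorch. It is a minimal illustration, not the paper's reference implementation: it assumes each layer's activations are modeled with a running Gaussian, that the normalizer of the normalized maximum likelihood (NML) code is accumulated incrementally over training steps, and that the resulting universal code length is applied as a scalar gain. The class name `RegularityNorm` and all hyperparameters are hypothetical.

```python
# Minimal sketch of a regularity-normalization-style layer (illustrative only).
import math

import torch
import torch.nn as nn


class RegularityNorm(nn.Module):
    def __init__(self, momentum: float = 0.1, eps: float = 1e-5):
        super().__init__()
        self.momentum = momentum
        self.eps = eps
        # Running Gaussian estimate of this layer's activation statistics.
        self.register_buffer("running_mean", torch.zeros(1))
        self.register_buffer("running_var", torch.ones(1))
        # Incrementally accumulated log-normalizer of the (approximate) NML code.
        self.register_buffer("log_normalizer", torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:
            batch_mean, batch_var = x.mean(), x.var()
            self.running_mean.mul_(1 - self.momentum).add_(self.momentum * batch_mean)
            self.running_var.mul_(1 - self.momentum).add_(self.momentum * batch_var)

        # Negative log-likelihood (code length) of the batch under the
        # running Gaussian model of the layer's implicit space.
        var = self.running_var + self.eps
        nll = 0.5 * (((x - self.running_mean) ** 2) / var).mean() \
            + 0.5 * torch.log(2 * math.pi * var)

        if self.training:
            # Accumulate the normalizer as a running log-sum-exp of the
            # likelihoods seen so far during optimization.
            self.log_normalizer.copy_(
                torch.logaddexp(self.log_normalizer, -nll).detach()
            )

        # Universal code length = data code length + log-normalizer; here it is
        # treated as a constant scalar gain on the activations for simplicity.
        code_length = (nll + self.log_normalizer).detach()
        return x / code_length.clamp(min=self.eps)
```

Under these assumptions, the layer could be dropped into a network much like any other normalization module, e.g. `nn.Sequential(nn.Linear(784, 256), RegularityNorm(), nn.ReLU())`; the paper itself evaluates its method across several architectures and tasks as described above.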
Year
2022
DOI
10.3390/e24010059
Venue
ENTROPY
Keywords
neuronal coding, biologically plausible models, minimum description length, deep neural networks, normalization methods
DocType
Journal
Volume
24
Issue
1
ISSN
1099-4300
Citations
0
PageRank
0.34
References
0
Authors
1
Name
Baihan Lin
Order
1
Citations
0
PageRank
3.04