Title
Bayesian spiking neurons II: learning.
Abstract
In the companion letter in this issue (“Bayesian Spiking Neurons I: Inference”), we showed that the dynamics of spiking neurons can be interpreted as a form of Bayesian integration, accumulating evidence over time about events in the external world or the body. Here we develop a theory of Bayesian learning in spiking neural networks, in which neurons learn to recognize the temporal dynamics of their synaptic inputs. In parallel, successive layers of neurons learn hierarchical causal models of the sensory input. The corresponding learning rule is local, spike-time dependent, and highly nonlinear. This approach provides a principled description of spiking and plasticity rules that maximize information transfer between successive layers of neurons while limiting the number of costly spikes.
Year
2008
DOI
10.1162/neco.2008.20.1.118
Venue
Neural Computation
Keywords
costly spike,companion letter,bayesian integration,information transfer,spiking neuron,bayesian spiking neurons ii,hierarchical causal model,corresponding learning rule,successive layer,external world,bayesian spiking,spiking neural network,bayesian learning,causal models
Field
Bayesian inference,Random neural network,Computer science,Models of neural computation,Bayesian network,Learning rule,Artificial intelligence,Artificial neural network,Spiking neural network,Machine learning,Bayesian probability
DocType
Journal
Volume
20
Issue
1
ISSN
0899-7667
Citations
18
PageRank
2.30
References
6
Authors
1
Name
Sophie Denève
Order
1
Citations
172
PageRank
17.55