Title: Continuous Neural Networks
Abstract
This article extends neural networks to the case of an uncountable number of hidden units, in several ways. In the first approach proposed, a finite parametrization is possible, allowing gradient-based learning. While having the same number of parameters as an ordinary neural network, its internal structure suggests that it can represent some smooth functions much more compactly. Under mild assumptions, we also find better error bounds than with ordinary neural networks. Furthermore, this parametrization may help reduce the problem of saturation of the neurons. In a second approach, the input-to-hidden weights are fully nonparametric, yielding a kernel machine for which we demonstrate a simple kernel formula. Interestingly, the resulting kernel machine can be made hyperparameter-free and still generalizes in spite of an absence of explicit regularization.
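The kernel-machine view in the second approach can be illustrated concretely. Below is a minimal NumPy sketch, not the paper's code: assuming sign (threshold) hidden units with input weights drawn uniformly over directions, the expected product of two hidden-unit activations is 1 - 2*theta/pi, where theta is the angle between the bias-augmented inputs (a standard identity; the paper's exact kernel formula may differ). The resulting Gram matrix can then be used for regression with no explicit regularizer, in the spirit of the abstract's hyperparameter-free claim. The function name and toy data below are illustrative assumptions.

    import numpy as np

    def sign_unit_kernel(X, Y):
        """Kernel induced by sign hidden units with uniformly distributed
        input weights: K(x, y) = 1 - 2*theta/pi, where theta is the angle
        between the bias-augmented inputs (illustrative assumption)."""
        Xa = np.hstack([X, np.ones((X.shape[0], 1))])  # fold the bias into the weights
        Ya = np.hstack([Y, np.ones((Y.shape[0], 1))])
        Xn = Xa / np.linalg.norm(Xa, axis=1, keepdims=True)
        Yn = Ya / np.linalg.norm(Ya, axis=1, keepdims=True)
        theta = np.arccos(np.clip(Xn @ Yn.T, -1.0, 1.0))
        return 1.0 - 2.0 * theta / np.pi

    # Hyperparameter-free use: fit output weights by solving K a = y on the
    # training Gram matrix, with no explicit regularization term.
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(40, 2))
    y_train = np.sin(X_train[:, 0]) - X_train[:, 1] ** 2
    K = sign_unit_kernel(X_train, X_train)
    a = np.linalg.lstsq(K, y_train, rcond=None)[0]

    X_test = rng.normal(size=(5, 2))
    y_pred = sign_unit_kernel(X_test, X_train) @ a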
Year: 2007
Venue: AISTATS
Keywords: neural network
Field: Mathematical optimization, Physical neural network, Computer science, Stochastic neural network, Algorithm, Recurrent neural network, Probabilistic neural network, Time delay neural network, Types of artificial neural networks, Kernel method, Artificial neural network
DocType: Journal
Volume: 2
Citations: 4
PageRank: 0.42
References: 1
Authors: 4
Name                      Order  Citations  PageRank
Nicolas Le Roux           1      1684       145.19
Université de Montréal    2      15         3.20
Montréal, Québec          3      4          0.42
Yoshua Bengio             4      42677      3039.83