Title
Natural Neural Networks
Abstract
We introduce Natural Neural Networks, a novel family of algorithms that speed up convergence by adapting their internal representation during training to improve conditioning of the Fisher matrix. In particular, we show a specific example that employs a simple and efficient reparametrization of the neural network weights by implicitly whitening the representation obtained at each layer, while preserving the feed-forward computation of the network. Such networks can be trained efficiently via the proposed Projected Natural Gradient Descent algorithm (PRONG), which amortizes the cost of these reparametrizations over many parameter updates and is closely related to the Mirror Descent online learning algorithm. We highlight the benefits of our method on both unsupervised and supervised learning tasks, and showcase its scalability by training on the large-scale ImageNet Challenge dataset.
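The layer-wise whitening reparametrization mentioned in the abstract can be illustrated concretely: each layer's canonical weights W and bias b are factored as W = V U and b = d - V U mu, where (mu, U) center and whiten the layer's input and are re-estimated only occasionally, while (V, d) are the trained parameters. The following is a minimal NumPy sketch of that idea, not the authors' implementation; the names compute_whitening and WhitenedLayer, the ReLU nonlinearity, and the eigendecomposition-based (ZCA-style) whitening are assumptions made for illustration.

```python
import numpy as np

def compute_whitening(x, eps=1e-5):
    # Estimate the mean and a ZCA-style whitening matrix from activations x of shape (N, D).
    mu = x.mean(axis=0)
    cov = np.cov(x, rowvar=False) + eps * np.eye(x.shape[1])
    eigvals, eigvecs = np.linalg.eigh(cov)
    U = eigvecs @ np.diag(1.0 / np.sqrt(eigvals)) @ eigvecs.T
    return mu, U

class WhitenedLayer:
    # Computes relu(V @ (U @ (x - mu)) + d).
    # Effective (canonical) parameters: W = V @ U and b = d - V @ U @ mu.
    def __init__(self, in_dim, out_dim, rng):
        self.V = rng.standard_normal((out_dim, in_dim)) / np.sqrt(in_dim)
        self.d = np.zeros(out_dim)
        self.mu = np.zeros(in_dim)   # start from identity whitening
        self.U = np.eye(in_dim)

    def forward(self, x):
        return np.maximum(0.0, (x - self.mu) @ self.U.T @ self.V.T + self.d)

    def reparametrize(self, x):
        # Amortized step: re-estimate (mu, U) from fresh activation statistics,
        # then adjust (V, d) so the function computed by the layer is unchanged.
        W = self.V @ self.U                   # current effective weights
        b = self.d - W @ self.mu              # current effective bias
        self.mu, self.U = compute_whitening(x)
        self.V = W @ np.linalg.inv(self.U)    # keep W = V_new @ U_new
        self.d = b + W @ self.mu              # keep b = d_new - V_new @ U_new @ mu_new

# Usage sketch: the re-whitening preserves the feed-forward computation.
rng = np.random.default_rng(0)
layer = WhitenedLayer(8, 4, rng)
x = rng.standard_normal((256, 8)) * 3.0 + 1.0
before = layer.forward(x)
layer.reparametrize(x)
after = layer.forward(x)
assert np.allclose(before, after, atol=1e-6)
```

In a full training loop, gradient updates would be applied to (V, d) for many steps between such re-whitening calls, which is how, per the abstract, the cost of the reparametrization is amortized over many parameter updates.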
Year
2015
Venue
Annual Conference on Neural Information Processing Systems
Field
Convergence (routing), Natural gradient, Computer science, Matrix (mathematics), Supervised learning, Artificial intelligence, Artificial neural network, Machine learning, Speedup, Computation, Scalability
DocType
Volume
abs/1507.00210
ISSN
1049-5258
Journal
Citations
0
PageRank
0.34
References
0
Authors
4
Name, Order, Citations, PageRank
Guillaume Desjardins, 1, 490, 27.99
Karen Simonyan, 2, 12058, 446.84
Razvan Pascanu, 3, 2596, 199.21
Koray Kavukcuoglu, 4, 10189, 504.11