Title
Invariance of Weight Distributions in Rectified MLPs.
Abstract
An interesting approach to analyzing and developing tools for neural networks that has received renewed attention is to examine the equivalent kernel of the neural network. This is based on the fact that a fully connected feedforward network with one hidden layer, a certain weight distribution, an activation function, and an infinite number of neurons can be viewed as a mapping into a Hilbert space. We show that the equivalent kernel of an MLP with ReLU or Leaky ReLU activations is the same for all rotationally-invariant weight distributions, generalizing a previous result that required Gaussian weight distributions, and we derive the equivalent kernel for these cases. In deep networks, the equivalent kernel approaches a pathological fixed point, which can be used to argue why training randomly initialized deep networks can be difficult. Our results also have implications for weight initialization and for the level sets of neural network cost functions.
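For ReLU activations, the equivalent kernel referred to in the abstract is the degree-one arc-cosine kernel of Cho and Saul (2009). The sketch below is a hypothetical Monte Carlo illustration (not code from the paper) of the invariance claim: a wide hidden layer of ReLU units with a rotationally-invariant but non-Gaussian weight distribution reproduces that kernel up to an overall scale. The choice of exponential radii, the normalization by E[||w||^2]/d, and all variable names are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch: Monte Carlo check that a hidden layer of ReLU units with
# rotationally-invariant, non-Gaussian weights has the same equivalent kernel
# (up to overall scale) as Gaussian weights, namely the degree-one arc-cosine
# kernel of Cho & Saul (2009):
#   k(x, y) = ||x|| ||y|| (sin t + (pi - t) cos t) / (2 pi),  t = angle(x, y).

rng = np.random.default_rng(0)
d, n_hidden = 3, 200_000

x = np.array([1.0, 0.5, -0.2])
y = np.array([-0.3, 1.2, 0.7])

# Rotationally-invariant but non-Gaussian weights: uniform directions on the
# sphere scaled by exponentially distributed radii (an arbitrary choice).
directions = rng.normal(size=(n_hidden, d))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
radii = rng.exponential(scale=1.0, size=(n_hidden, 1))
W = radii * directions

# Empirical kernel: average over hidden units of relu(w.x) * relu(w.y),
# rescaled by E[||w||^2] / d so it is comparable to unit-variance weights.
scale = np.mean(np.sum(W ** 2, axis=1)) / d
k_mc = np.mean(np.maximum(W @ x, 0.0) * np.maximum(W @ y, 0.0)) / scale

# Closed-form arc-cosine kernel of degree one.
cos_t = x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
t = np.arccos(np.clip(cos_t, -1.0, 1.0))
k_exact = (np.linalg.norm(x) * np.linalg.norm(y)
           * (np.sin(t) + (np.pi - t) * np.cos(t)) / (2 * np.pi))

print(f"Monte Carlo: {k_mc:.4f}   closed form: {k_exact:.4f}")
# The two values agree up to Monte Carlo error, illustrating the invariance
# across rotationally-invariant weight distributions.
```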
Year: 2018
Venue: ICML
DocType: Conference
Volume: abs/1711.09090
Citations: 2
PageRank: 0.42
References: 28
Authors: 3
Name | Order | Citations | PageRank
Russell Tsuchida | 1 | 3 | 1.10
Farbod Roosta-Khorasani | 2 | 102 | 9.25
Marcus Gallagher | 3 | 3 | 1.10