Title
On the Learnability of Random Deep Networks
Abstract
In this paper we study the learnability of random deep networks, both theoretically and experimentally. On the theoretical front, assuming the statistical query model, we show that the learnability of random deep networks with sign activation drops exponentially with depth; under plausible conjectures, our results extend to ReLU and sigmoid activations. The core of the argument is that even for highly correlated inputs, the outputs of deep random networks are near-orthogonal. On the experimental side, we find that the learnability of random networks drops sharply with depth even with state-of-the-art training methods.
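The abstract's central claim, that a random deep network with sign activation maps even highly correlated inputs to near-orthogonal outputs, can be illustrated numerically. The sketch below is a hypothetical simulation, not the authors' code: it tracks the cosine similarity of two correlated inputs through successive random Gaussian layers with sign activation, where the width `d` and the noise level are arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2000  # layer width (arbitrary choice for this sketch)

# Two highly correlated unit-norm inputs
x = rng.standard_normal(d)
y = x + 0.1 * rng.standard_normal(d)
x /= np.linalg.norm(x)
y /= np.linalg.norm(y)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

a, b = x, y
sims = []
for depth in range(1, 9):
    # Fresh random Gaussian layer, scaled so pre-activations are O(1)
    W = rng.standard_normal((d, d)) / np.sqrt(d)
    a = np.sign(W @ a)  # sign activation
    b = np.sign(W @ b)
    sims.append(cosine(a, b))
    print(f"depth {depth}: cos similarity = {sims[-1]:.3f}")
```

For Gaussian weights and sign activation, each layer maps the input correlation rho to roughly (2/pi)·arcsin(rho), which is strictly below rho on (0, 1), so the similarity contracts toward zero layer by layer; this is the exponential decorrelation the abstract attributes to depth.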
Year
2020
DOI
10.5555/3381089.3381113
Venue
SODA '20: ACM-SIAM Symposium on Discrete Algorithms, Salt Lake City, Utah, January 2020
Field
Discrete mathematics, Computer science, Theoretical computer science, Learnability
DocType
Conference
Citations
1
PageRank
0.35
References
0
Authors
4
Name | Order | Citations | PageRank
Abhimanyu Das | 1 | 314 | 22.43
Sreenivas Gollapudi | 2 | 1198 | 64.70
Ravi Kumar | 3 | 13932 | 1642.48
Rina Panigrahy | 4 | 3203 | 269.05