Title
Why does deep and cheap learning work so well?
Abstract
We show how the success of deep learning could depend not only on mathematics but also on physics: although well-known mathematical theorems guarantee that neural networks can approximate arbitrary functions well, the class of functions of practical interest can frequently be approximated through “cheap learning” with exponentially fewer parameters than generic ones. We explore how properties frequently encountered in physics such as symmetry, locality, compositionality, and polynomial log-probability translate into exceptionally simple neural networks. We further argue that when the statistical process generating the data is of a certain hierarchical form prevalent in physics and machine learning, a deep neural network can be more efficient than a shallow one. We formalize these claims using information theory and discuss the relation to the renormalization group. We prove various “no-flattening theorems” showing when efficient linear deep networks cannot be accurately approximated by shallow ones without efficiency loss; for example, we show that n variables cannot be multiplied using fewer than 2^n neurons in a single hidden layer.
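The abstract's multiplication claim has a constructive counterpart in the paper: a product of two variables can be approximated by a single hidden layer of just four neurons, provided the nonlinearity has a nonzero second derivative at zero and the inputs are rescaled. Below is a minimal numerical sketch of that construction in Python (illustrative, not code from the paper); the function names, the softplus choice, and the scale parameter a are assumptions made for this example.

```python
# Minimal numerical sketch (illustrative, not the paper's code): approximating the
# product x*y with a single hidden layer of 4 neurons, using a smooth nonlinearity
# sigma with sigma''(0) != 0 and a small input rescaling factor a.
import numpy as np

def softplus(u):
    # Smooth nonlinearity with nonzero second derivative at 0 (sigma''(0) = 1/4).
    # tanh or the logistic sigmoid would not work here, since their second
    # derivatives vanish at the origin.
    return np.log1p(np.exp(u))

def approx_multiply(x, y, a=1e-3, sigma=softplus, sigma2_at_0=0.25):
    # Hidden layer: 4 neurons with pre-activations +/- a*(x + y) and +/- a*(x - y).
    h = (sigma(a * (x + y)) + sigma(-a * (x + y))
         - sigma(a * (x - y)) - sigma(-a * (x - y)))
    # Output layer: fixed weights 1 / (4 * a^2 * sigma''(0)); a Taylor expansion of
    # sigma around 0 shows this equals x*y up to an O(a^2) error.
    return h / (4 * a**2 * sigma2_at_0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x, y = rng.normal(size=5), rng.normal(size=5)
    print(np.c_[x * y, approx_multiply(x, y)])  # the two columns agree to ~1e-6
```

Shrinking a pushes the hidden units into the near-quadratic regime of the nonlinearity, which is why the approximation error scales as a^2; the exponential lower bound quoted in the abstract applies when exact (or uniformly accurate) multiplication of n variables is demanded from a single hidden layer.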
Year
2016
DOI
https://doi.org/10.1007/s10955-017-1836-5
Venue
Journal of Statistical Physics
Keywords
Artificial neural networks, Deep learning, Statistical physics
Field
Principle of compositionality, Information theory, Locality, Polynomial, Theoretical computer science, Statistical process control, Artificial intelligence, Deep learning, Artificial neural network, Mathematics, Renormalization group
DocType
Journal
Volume
abs/1608.08225
Issue
6
ISSN
0022-4715
Citations
57
PageRank
2.60
References
14
Authors
2
Name, Order, Citations, PageRank
Henry W. Lin, 1, 70, 4.05
Max Tegmark, 2, 160, 16.27