Title |
---|
Time-Space Tradeoffs for Learning from Small Test Spaces: Learning Low Degree Polynomial Functions |
Abstract |
---|
We develop an extension of recently developed methods for obtaining time-space tradeoff lower bounds for problems of learning from random test samples to handle the situation where the space of tests is significantly smaller than the space of inputs, a class of learning problems that is not handled by prior work. This extension is based on a measure of how matrices amplify the 2-norms of probability distributions that is more refined than the 2-norms of these matrices. As applications that follow from our new technique, we show that any algorithm that learns $m$-variate homogeneous polynomial functions of degree at most $d$ over $\mathbb{F}_2$ from evaluations on randomly chosen inputs either requires space $\Omega(mn)$ or $2^{\Omega(m)}$ time, where $n=m^{\Theta(d)}$ is the dimension of the space of such functions. These bounds are asymptotically optimal since they match the tradeoffs achieved by natural learning algorithms for these problems. |
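The abstract's parameter $n=m^{\Theta(d)}$ counts the monomials spanning the space of candidate polynomials. As a minimal sketch (not from the paper, assuming multilinear monomials of degree at most $d$, since $x^2=x$ over $\mathbb{F}_2$), the dimension can be computed directly:

```python
from math import comb

def poly_space_dim(m: int, d: int) -> int:
    # Over F_2, x^2 = x, so every polynomial is multilinear:
    # one monomial per subset of at most d of the m variables.
    return sum(comb(m, i) for i in range(d + 1))

# For fixed d, the dimension grows as m^Theta(d):
print(poly_space_dim(20, 2))  # 1 + 20 + 190 = 211
```

For constant $d$, the dominant term $\binom{m}{d}$ is $\Theta(m^d)$, matching the $n = m^{\Theta(d)}$ scaling stated in the abstract.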
Year | Venue | DocType |
---|---|---|
2017 | Electronic Colloquium on Computational Complexity (ECCC) | Journal |
Volume | Citations | PageRank
---|---|---|
24 | 2 | 0.42
References | Authors
---|---|
0 | 3
Name | Order | Citations | PageRank |
---|---|---|---|
Paul Beame | 1 | 2234 | 176.07 |
Shayan Oveis Gharan | 2 | 323 | 26.63 |
Xin Yang | 3 | 2 | 0.76 |