Title
The Sample Complexity of Optimizing a Convex Function.
Abstract
In this paper we study the optimization of convex functions from samples. There are many scenarios in which we do not know the function we wish to optimize but can learn it from data. In such cases, we are interested in bounding the number of samples required to optimize the function. Our main result shows that, in general, the number of samples required to obtain a non-trivial approximation to the optimum of a convex function is exponential in its dimension, even when the function is PAC-learnable. We also obtain strong lower bounds for strongly convex and Lipschitz continuous functions. On the positive side, we show that there are interesting classes of functions and distributions for which the sample complexity is polynomial in the dimension of the function.
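As a hedged sketch of the setting the abstract describes (the symbols $\mathcal{D}$, $K$, $\epsilon$, and $m$ below are illustrative notation, not taken from the paper): the algorithm observes i.i.d. samples
\[
  S = \{(x_1, f(x_1)), \dots, (x_m, f(x_m))\}, \qquad x_i \sim \mathcal{D},
\]
of an unknown convex function $f : K \to \mathbb{R}$ on a convex body $K \subseteq \mathbb{R}^n$, and must output a point $\bar{x} \in K$ with
\[
  f(\bar{x}) \;\le\; \min_{x \in K} f(x) + \epsilon.
\]
The sample complexity is the smallest $m = m(n, \epsilon)$ for which this is achievable; the main result stated in the abstract says that, in general, $m$ must grow exponentially in the dimension $n$, even when $f$ is PAC-learnable from the same samples.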
Year
2017
Venue
COLT
Field
Mathematical optimization, Convex combination, Computer science, Effective domain, Quasiconvex function, Random coordinate descent, Conic optimization, Proper convex function, Convex optimization, Linear matrix inequality
DocType
Conference
Citations
1
PageRank
0.36
References
9
Authors
2
Name, Order, Citations, PageRank
Eric Balkanski, 1, 38, 6.13
Yaron Singer, 2, 516, 37.15