Abstract
---
We consider an online supervised learning problem in which both the instances (input vectors) and the comparator (weight vector) are unconstrained. We exploit a natural scale-invariance symmetry in our unconstrained setting: the predictions of the optimal comparator are invariant under any linear transformation of the instances. Our goal is to design online algorithms which also enjoy this property, i.e., are scale-invariant. We start with the case of coordinate-wise invariance, in which the individual coordinates (features) can be arbitrarily rescaled. We give an algorithm which achieves an essentially optimal regret bound in this setup, expressed by means of a coordinate-wise scale-invariant norm of the comparator. We then study general invariance with respect to arbitrary linear transformations. We first give a negative result, showing that no algorithm can achieve a meaningful bound in terms of a scale-invariant norm of the comparator in the worst case. Next, we complement this result with a positive one, providing an algorithm which "almost" achieves the desired bound, incurring only a logarithmic overhead in terms of the relative size of the instances.
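The invariance property underlying the abstract can be illustrated with a small numerical sketch: if each feature is rescaled by an arbitrary positive factor, the optimal comparator's linear predictions are unchanged, because its weights can be rescaled inversely. All names and values below are illustrative, not taken from the paper.

```python
import numpy as np

# Coordinate-wise scale invariance (illustrative sketch):
# predictions <u, x> are preserved when features are rescaled
# and the comparator is rescaled inversely.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))          # instances (rows are input vectors)
u = np.array([0.5, -2.0, 1.0])       # comparator (weight vector), unconstrained

scales = np.array([10.0, 0.01, 3.0])  # arbitrary per-coordinate rescaling
X_scaled = X * scales                 # rescaled instances
u_scaled = u / scales                 # inversely rescaled comparator

# The predictions agree before and after rescaling.
print(np.allclose(X @ u, X_scaled @ u_scaled))  # True
```

This is why a meaningful regret bound in this setting must be stated in terms of a scale-invariant norm of the comparator rather than, say, a fixed Euclidean norm.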
Year | DOI | Venue
---|---|---
2020 | 10.1016/j.tcs.2019.11.016 | Theoretical Computer Science

Keywords | Field | DocType
---|---|---
Online learning, Online convex optimization, Scale invariance, Unconstrained online learning, Linear classification, Regret bound | Discrete mathematics, Online algorithm, Scale invariance, Invariant (physics), Algorithm, Weight, Supervised learning, Linear map, Invariant (mathematics), Logarithm, Mathematics | Journal

Volume | ISSN | Citations
---|---|---
808 | 0304-3975 | 0

PageRank | References | Authors
---|---|---
0.34 | 0 | 1
Name | Order | Citations | PageRank
---|---|---|---
Wojciech Kotlowski | 1 | 158 | 16.32