Title
Minimal Expected Regret in Linear Quadratic Control
Abstract
We consider the problem of online learning in Linear Quadratic Control systems whose state transition and state-action transition matrices A and B may be initially unknown. We devise an online learning algorithm and provide guarantees on its expected regret. This regret at time T is upper bounded (i) by $\tilde{O}((d_u + d_x)\sqrt{d_x T})$ when A and B are unknown, (ii) by $\tilde{O}(d_x^2 \log(T))$ if only A is unknown, and (iii) by $\tilde{O}(d_x(d_u + d_x)\log(T))$ if only B is unknown and under some mild non-degeneracy condition ($d_x$ and $d_u$ denote the dimensions of the state and of the control input, respectively). These regret scalings are minimal in $T$, $d_x$ and $d_u$ as they match existing lower bounds in scenario (i) when $d_x \le d_u$ (Simchowitz and Foster, 2020), and in scenario (ii) (Lai, 1986). We conjecture that our upper bounds are also optimal in scenario (iii) (there is no known lower bound in this setting). Existing online algorithms proceed in epochs of (typically exponentially) growing durations. The control policy is fixed within each epoch, which considerably simplifies the analysis of the estimation error on A and B and hence of the regret. Our algorithm departs from this design choice: it is a simple variant of certainty-equivalence regulators, where the estimates of A and B and the resulting control policy can be updated as frequently as we wish, possibly at every step. Quantifying the impact of such a constantly-varying control policy on the performance of these estimates and on the regret constitutes one of the technical challenges tackled in this paper.
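To illustrate the certainty-equivalence idea described in the abstract, the sketch below shows a minimal online loop that re-estimates A and B by regularized least squares and recomputes the feedback gain at every step. It is only a rough illustration, not the authors' exact algorithm: the regularization `reg`, the exploration noise `sigma`, and the Riccati fallback are illustrative assumptions, and the paper's specific update rules and analysis are not reproduced here.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def ce_gain(A_hat, B_hat, Q, R):
    # Certainty-equivalence gain: solve the discrete-time Riccati equation
    # for the *estimated* system and return K such that u = -K x.
    P = solve_discrete_are(A_hat, B_hat, Q, R)
    return np.linalg.solve(R + B_hat.T @ P @ B_hat, B_hat.T @ P @ A_hat)

def run_ce_lqr(A, B, Q, R, T, sigma=0.1, reg=1.0, seed=0):
    # Illustrative online certainty-equivalence loop (assumed setup, not the paper's).
    rng = np.random.default_rng(seed)
    dx, du = B.shape
    V = reg * np.eye(dx + du)        # regularized Gram matrix of past (x_t, u_t)
    S = np.zeros((dx, dx + du))      # running sum of x_{t+1} (x_t, u_t)^T
    x = np.zeros(dx)
    for t in range(T):
        theta_hat = S @ np.linalg.inv(V)             # least-squares estimate of [A B]
        A_hat, B_hat = theta_hat[:, :dx], theta_hat[:, dx:]
        try:
            K = ce_gain(A_hat, B_hat, Q, R)          # gain from current estimates
        except Exception:
            K = np.zeros((du, dx))                   # fallback while estimates are degenerate
        u = -K @ x + sigma * rng.standard_normal(du) # control with small exploration noise
        x_next = A @ x + B @ u + rng.standard_normal(dx)
        z = np.concatenate([x, u])
        V += np.outer(z, z)                          # update statistics at every step,
        S += np.outer(x_next, z)                     # not only at epoch boundaries
        x = x_next
    return A_hat, B_hat
```

The point of the sketch is the design choice highlighted in the abstract: the estimates and the resulting policy change at every step, rather than being frozen within exponentially growing epochs.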
Year
2022
Venue
INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 151
DocType
Conference
Volume
151
ISSN
2640-3498
Citations
0
PageRank
0.34
References
0
Authors
2
Name | Order | Citations | PageRank
Yassir Jedra | 1 | 0 | 1.01
Alexandre Proutiere | 2 | 558 | 40.94