Title
On Applications of Bootstrap in Continuous Space Reinforcement Learning.
Abstract
In decision-making problems with continuous state and action spaces, linear dynamical models are widely employed. Specifically, policies for stochastic linear systems subject to quadratic cost functions capture a large number of applications in reinforcement learning. Certain randomized policies that address the trade-off between identification and control have recently been studied in the literature. However, little is known about policies based on bootstrapping observed states and actions. In this work, we show that bootstrap-based policies achieve regret that scales as the square root of time. We also obtain results on the accuracy of learning the model's dynamics. Numerical analysis corroborating the technical results is also provided.
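The setting summarized above is adaptive linear-quadratic (LQ) control: unknown linear dynamics driven by noise, with a quadratic state-input cost. As a hedged illustration only (this is not the paper's algorithm, and all names, dimensions, noise levels, and the update schedule below are illustrative assumptions), the following Python sketch shows one way a bootstrap-based policy can be organized: periodically resample the observed transitions with replacement, re-estimate the dynamics by least squares on the bootstrap sample, and apply the certainty-equivalent LQR gain.

# A minimal sketch (not the authors' method): bootstrap-based adaptive LQR.
# Assumptions: (A_true, B_true) are known only to simulate the environment;
# Q, R are identity cost matrices; fit_dynamics and lqr_gain are hypothetical
# helper names introduced here for illustration.
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(0)
n, d = 3, 2                          # state and input dimensions (arbitrary)
A_true = 0.9 * np.eye(n)             # hypothetical stable dynamics
B_true = 0.5 * rng.standard_normal((n, d))
Q, R = np.eye(n), np.eye(d)          # quadratic cost matrices

def lqr_gain(A, B):
    """Certainty-equivalent LQR gain for estimated dynamics (A, B)."""
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

def fit_dynamics(X, U, Xn):
    """Least-squares estimate of [A B] from transitions x' = A x + B u + w."""
    Z = np.hstack([X, U])
    Theta, *_ = np.linalg.lstsq(Z, Xn, rcond=None)
    return Theta.T[:, :n], Theta.T[:, n:]

# Collect data, then refit the policy on a bootstrap resample every 50 steps.
X, U, Xn = [], [], []
x = np.zeros(n)
K = np.zeros((d, n))
for t in range(500):
    u = -K @ x + 0.1 * rng.standard_normal(d)       # small dither for excitation
    x_next = A_true @ x + B_true @ u + 0.1 * rng.standard_normal(n)
    X.append(x); U.append(u); Xn.append(x_next)
    x = x_next
    if t >= 20 and t % 50 == 0:
        idx = rng.integers(0, len(X), size=len(X))  # bootstrap resample
        A_hat, B_hat = fit_dynamics(np.array(X)[idx],
                                    np.array(U)[idx],
                                    np.array(Xn)[idx])
        K = lqr_gain(A_hat, B_hat)

The resampling step is what distinguishes this sketch from plain certainty equivalence: refitting on bootstrap replicates of the data injects randomness into the policy, which is one standard way to trade off identification against control.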
Year: 2019
DOI: 10.1109/CDC40024.2019.9029975
Venue: CDC
DocType: Conference
Citations: 0
PageRank: 0.34
References: 0
Authors: 3
Name                              Order  Citations  PageRank
Mohamad Kazem Shirani Faradonbeh  1      24         5.96
Ambuj Tewari                      2      1371       99.22
George Michailidis                3      0          4.73