Abstract |
---|
In this paper, we present a model-based reinforcement learning system in which the transition model is treated in a Bayesian manner. The approach naturally lends itself to exploiting expert knowledge by introducing priors that impose structure on the underlying learning task. The additional information introduced to the system means that we can learn from small amounts of data, recover an interpretable model and, importantly, provide predictions with an associated uncertainty. To show the benefits of the approach, we use a challenging data set in which the dynamics of the underlying system exhibit both operational phase shifts and heteroscedastic noise. Comparing our model to NFQ and BNN+LV, we show how our approach yields human-interpretable insight about the underlying dynamics while also improving data efficiency. |
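The core idea in the abstract, a Bayesian transition model whose predictions carry an associated uncertainty, can be illustrated with a minimal Gaussian process regression sketch. This is not the paper's actual hierarchical model; the RBF kernel, the toy 1-D dynamics, and all names here are illustrative assumptions.

```python
import numpy as np

def rbf(a, b, lengthscale=1.0, variance=1.0):
    """RBF (squared-exponential) kernel between two 1-D input vectors."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

# Toy transition data: next state as a noiseless function of the state.
X = np.linspace(-3.0, 3.0, 20)
y = np.sin(X)
noise_var = 1e-2  # assumed observation-noise variance

# GP posterior via the Cholesky factor of the kernel matrix.
K = rbf(X, X) + noise_var * np.eye(len(X))
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

def predict(x_star):
    """Posterior mean and variance of the next state at test inputs."""
    Ks = rbf(X, x_star)
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = rbf(x_star, x_star).diagonal() - np.sum(v * v, axis=0)
    return mean, var

m_in, v_in = predict(np.array([0.0]))     # inside the observed region
m_out, v_out = predict(np.array([10.0]))  # far from any observed transition
# The posterior variance grows away from the data, which is exactly the
# kind of calibrated uncertainty the abstract refers to.
```

Far from the training transitions the predictive variance reverts toward the kernel's prior variance, so a policy-search procedure built on such a model can tell apart regions it knows from regions it is guessing about.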
Year | DOI | Venue |
---|---|---|
2020 | 10.1016/j.neucom.2019.12.132 | Neurocomputing |
Keywords | DocType | Volume |
---|---|---|
Bayesian machine learning, Gaussian processes, Hierarchical Gaussian processes, Reinforcement learning, Model-based reinforcement learning, Stochastic policy search, Data-efficiency | Journal | 416 |
ISSN | Citations | PageRank |
---|---|---|
0925-2312 | 1 | 0.39 |
References | Authors |
---|---|
0 | 4 |
Name | Order | Citations | PageRank |
---|---|---|---|
Markus Kaiser | 1 | 1 | 1.40 |
Clemens Otte | 2 | 5 | 4.53 |
Thomas A. Runkler | 3 | 345 | 47.43 |
Carl Henrik Ek | 4 | 327 | 30.76 |