Title
Online Reinforcement Learning for Real-Time Exploration in Continuous State and Action Markov Decision Processes.
Abstract
This paper presents a new method to learn online policies in continuous-state, continuous-action, model-free Markov decision processes, with two properties that are crucial for practical applications. First, the policies are implementable at very low computational cost: once the policy is computed, the action corresponding to a given state is obtained in time logarithmic in the number of samples used. Second, our method is versatile: it does not rely on any a priori knowledge of the structure of optimal policies. We build upon the Fitted Q-Iteration algorithm, which represents the $Q$-value as the average of several regression trees. Our algorithm, the Fitted Policy Forest algorithm (FPF), computes a regression forest representing the $Q$-value and transforms it into a single tree representing the policy, while controlling the size of the policy through resampling and leaf merging. We introduce an adaptation of Multi-Resolution Exploration (MRE) that is particularly suited to FPF. We assess the performance of FPF on three classical reinforcement learning benchmarks, the Inverted Pendulum, the Double Integrator, and Car on the Hill, and show that FPF equals or outperforms other algorithms, even though those algorithms rely on policy representations specifically chosen to fit each of the three problems. Finally, we show that the combination of FPF and MRE finds nearly optimal solutions in problems where $\epsilon$-greedy approaches would fail.
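The abstract builds on Fitted Q-Iteration, which repeatedly regresses Bellman targets computed from a fixed batch of transitions. The following is a minimal sketch of that loop on a hypothetical 1-D toy MDP (not from the paper); a trivial nearest-neighbor regressor stands in for the regression forests the paper actually uses, and all names, dynamics, and parameters here are illustrative assumptions.

```python
import random

# Hypothetical 1-D toy MDP (an assumption, not the paper's benchmark):
# state in [-1, 1], two discrete actions, reward penalizes distance from 0.
def step(s, a):
    s2 = max(-1.0, min(1.0, s + a))
    return s2, -s2 * s2

# Trivial nearest-neighbor regressor standing in for regression forests
# (a deliberate simplification for the sketch).
class NNRegressor:
    def __init__(self, xs, ys):
        self.data = list(zip(xs, ys))

    def predict(self, x):
        return min(self.data,
                   key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))[1]

def fitted_q_iteration(samples, actions, gamma=0.95, n_iter=10):
    """samples: list of (s, a, r, s2) transitions collected beforehand."""
    q = None
    for _ in range(n_iter):
        xs, ys = [], []
        for s, a, r, s2 in samples:
            # Bellman target: r + gamma * max_a' Q(s', a')
            target = r if q is None else \
                r + gamma * max(q.predict((s2, b)) for b in actions)
            xs.append((s, a))
            ys.append(target)
        q = NNRegressor(xs, ys)  # refit the Q-value approximator
    return q

actions = [-0.1, 0.1]
random.seed(0)
samples = []
for _ in range(200):
    s = random.uniform(-1, 1)
    a = random.choice(actions)
    s2, r = step(s, a)
    samples.append((s, a, r, s2))

q = fitted_q_iteration(samples, actions)
# Greedy policy: pick the action maximizing the learned Q-value.
policy = lambda s: max(actions, key=lambda a: q.predict((s, a)))
```

FPF goes one step further than this sketch: rather than querying the Q-value approximator at decision time, it compiles the greedy policy into a single regression tree, which is what yields the logarithmic-time action selection claimed in the abstract.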
Year
2016
Venue
arXiv: Artificial Intelligence
Field
Inverted pendulum, Regression, Double integrator, Computer science, A priori and a posteriori, Markov decision process, Artificial intelligence, Logarithm, Resampling, Machine learning, Reinforcement learning
DocType
Volume
abs/1612.03780
ISSN
Journal
ICAPS 26th, PlanRob 4th (Workshop) (2016) 37-48
Citations
0
PageRank
0.34
References
0
Authors
2
Name: Ludovic Hofer, Order: 1, Citations: 0, PageRank: 1.69
Name: Gimbert, H., Order: 2, Citations: 5, PageRank: 3.21