Title
Practical Reinforcement Learning in Continuous Spaces
Abstract
Dynamic control tasks are good candidates for the application of reinforcement learning techniques. However, many of these tasks inherently have continuous state or action variables. This can cause problems for traditional reinforcement learning algorithms, which assume discrete states and actions. In this paper, we introduce an algorithm that safely approximates the value function for continuous-state control tasks, and that learns quickly from a small amount of data. We give experimental results using this algorithm to learn policies for both a simulated task and a real robot operating in an unaltered environment. The algorithm works well in a traditional learning setting, and demonstrates extremely good learning when bootstrapped with a small amount of human-provided data.
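The abstract describes a general recipe: approximate the value function over a continuous state space, and optionally seed learning with human-provided experience before autonomous trials. The sketch below is one minimal way to realize that recipe, assuming one-step Q-learning, a discrete action set, and a k-nearest-neighbour regressor for Q-values; the environment hooks (env_reset, env_step) and all other names are hypothetical, and this is not the authors' algorithm.

```python
# Illustrative sketch only: Q-learning over a continuous state space with an
# instance-based value-function approximator, optionally bootstrapped from
# human-provided transitions. Names are hypothetical, not the paper's method.
import numpy as np


class KNNQFunction:
    """Approximate Q(s, a) by inverse-distance-weighted averaging of stored
    (state, value) pairs, kept separately for each discrete action."""

    def __init__(self, n_actions, k=5):
        self.n_actions = n_actions
        self.k = k
        self.memory = [[] for _ in range(n_actions)]  # per-action (state, value) lists

    def predict(self, state, action):
        points = self.memory[action]
        if not points:
            return 0.0  # conservative default in regions with no data
        states = np.array([s for s, _ in points])
        values = np.array([v for _, v in points])
        dists = np.linalg.norm(states - np.asarray(state, dtype=float), axis=1)
        nearest = np.argsort(dists)[: self.k]
        weights = 1.0 / (dists[nearest] + 1e-6)  # inverse-distance weighting
        return float(weights @ values[nearest] / weights.sum())

    def update(self, state, action, target):
        self.memory[action].append((np.asarray(state, dtype=float), float(target)))


def train(q, env_reset, env_step, episodes=50, gamma=0.95, epsilon=0.1,
          seed_transitions=()):
    """One-step Q-learning. seed_transitions are optional human-provided
    (s, a, r, s_next, done) tuples replayed before autonomous learning."""

    def backup(r, s_next, done):
        # Standard Q-learning target: r + gamma * max_a' Q(s', a')
        if done:
            return r
        return r + gamma * max(q.predict(s_next, b) for b in range(q.n_actions))

    for s, a, r, s_next, done in seed_transitions:  # bootstrap phase
        q.update(s, a, backup(r, s_next, done))

    rng = np.random.default_rng(0)
    for _ in range(episodes):
        s, done = env_reset(), False
        while not done:
            if rng.random() < epsilon:  # epsilon-greedy exploration
                a = int(rng.integers(q.n_actions))
            else:
                a = int(np.argmax([q.predict(s, b) for b in range(q.n_actions)]))
            s_next, r, done = env_step(s, a)
            q.update(s, a, backup(r, s_next, done))
            s = s_next
```

One reason an instance-based approximator fits the abstract's "safely approximates" framing: averaging stored values only interpolates between observed data, which avoids the wild extrapolation that can destabilize value-function approximation in unvisited regions of a continuous state space.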
Year
2000
Venue
ICML
Keywords
practical reinforcement learning, continuous spaces, value function, reinforcement learning
Field
Computer science, Bootstrapping, Bellman equation, Artificial intelligence, Robot, Error-driven learning, Machine learning, Reinforcement learning
DocType
Conference
ISBN
1-55860-707-2
Citations
81
PageRank
6.97
References
11
Authors
2
Name                    Order  Citations  PageRank
William D. Smart        1      226        26.50
Leslie Pack Kaelbling   2      5930       854.90