Title
Reinforcement learning in multidimensional continuous action spaces
Abstract
The majority of learning algorithms available today focus on approximating the state (V) or state-action (Q) value function, and efficient action selection comes as an afterthought. On the other hand, real-world problems tend to have large action spaces, where evaluating every possible action becomes impractical. This mismatch presents a major obstacle in successfully applying reinforcement learning to real-world problems. In this paper we present an effective approach to learning and acting in domains with multidimensional and/or continuous control variables, where efficient action selection is embedded in the learning process. Instead of learning and representing the state or state-action value function of the MDP, we learn a value function over an implied augmented MDP, where states represent collections of actions in the original MDP and transitions represent choices eliminating parts of the action space at each step. Action selection in the original MDP is reduced to a binary search by the agent in the transformed MDP, with computational complexity logarithmic in the number of actions, or equivalently linear in the number of action dimensions. Our method can be combined with any discrete-action reinforcement learning algorithm for learning multidimensional continuous-action policies using a state value approximator in the transformed MDP. Our preliminary results with two well-known reinforcement learning algorithms (Least-Squares Policy Iteration and Fitted Q-Iteration) on two continuous action domains (1-dimensional inverted pendulum regulator, 2-dimensional bicycle balancing) demonstrate the viability and the potential of the proposed approach.
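The binary-search action selection described in the abstract can be illustrated with a minimal sketch (not the authors' code). It assumes a hypothetical learned value function `q_value(state, interval)` scoring an (original state, action sub-interval) pair, i.e. a state of the implied augmented MDP, and selects a 1-D continuous action by repeatedly halving the action range:

```python
# Illustrative sketch, not the paper's implementation: action selection as a
# binary search over a bounded 1-D continuous action range. `q_value` is an
# assumed learned scorer over (state, action-interval) pairs, standing in for
# the value function of the augmented MDP described in the abstract.

def select_action(q_value, state, low, high, depth):
    """Halve the action interval `depth` times, greedily keeping the half
    whose sub-interval scores higher; return the final midpoint as the
    chosen action. Cost is `depth` evaluations per half, i.e. logarithmic
    in the resolution of the discretized action range."""
    for _ in range(depth):
        mid = (low + high) / 2.0
        # Each comparison is one transition in the transformed MDP:
        # a choice that eliminates half of the remaining action space.
        if q_value(state, (low, mid)) >= q_value(state, (mid, high)):
            high = mid
        else:
            low = mid
    return (low + high) / 2.0
```

For a multidimensional action space, one such search per dimension yields the cost linear in the number of action dimensions that the abstract states.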
Year
2011
DOI
10.1109/ADPRL.2011.5967381
Venue
Adaptive Dynamic Programming And Reinforcement Learning
Keywords
Markov processes, iterative methods, learning (artificial intelligence), 1D inverted pendulum regulator, 2D bicycle balancing, Markov decision process, discrete-action reinforcement learning, fitted Q-iteration, least-squares policy iteration, multidimensional continuous action spaces, state value approximator, state value function, state-action value function
Field
Approximation algorithm, Mathematical optimization, Q-learning, Markov decision process, Bellman equation, Binary search algorithm, Action selection, Mathematics, Reinforcement learning, Computational complexity theory
DocType
Conference
ISBN
978-1-4244-9887-1
Citations
13
PageRank
0.77
References
11
Authors
2
Name                   Order  Citations  PageRank
Jason Pazis            1      104        6.97
Michail G. Lagoudakis  2      87         7.19