Title
Policy Gradients with Parameter-Based Exploration for Control
Abstract
We present a model-free reinforcement learning method for partially observable Markov decision problems. Our method estimates a likelihood gradient by sampling directly in parameter space, which leads to lower-variance gradient estimates than those obtained by policy gradient methods such as REINFORCE. For several complex control tasks, including robust standing with a humanoid robot, we show that our method outperforms well-known algorithms from the fields of policy gradients, finite difference methods, and population-based heuristics. We also provide a detailed analysis of the differences between our method and the other algorithms.
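The abstract describes estimating a likelihood gradient by sampling whole parameter vectors rather than per-step actions. As an illustration only, here is a minimal sketch in the spirit of parameter-based exploration with a Gaussian search distribution; `run_episode`, the toy return function, and all hyperparameters are hypothetical stand-ins, not the paper's algorithm or benchmarks.

```python
import numpy as np

def run_episode(theta):
    # Hypothetical stand-in for a policy rollout: returns a scalar episode
    # return, higher the closer theta is to a fixed (pretend-unknown) optimum.
    optimum = np.array([1.0, -2.0, 0.5])
    return -float(np.sum((theta - optimum) ** 2))

def pgpe_sketch(dim=3, iterations=200, pop_size=20,
                alpha_mu=0.2, alpha_sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    mu = np.zeros(dim)      # mean of the Gaussian over policy parameters
    sigma = np.ones(dim)    # per-parameter exploration standard deviation
    baseline = 0.0          # moving-average return baseline (variance reduction)
    for _ in range(iterations):
        # Sample entire parameter vectors, not per-step actions
        thetas = mu + sigma * rng.standard_normal((pop_size, dim))
        returns = np.array([run_episode(t) for t in thetas])
        baseline = 0.9 * baseline + 0.1 * returns.mean()
        advantage = (returns - baseline)[:, None]
        diff = thetas - mu
        # Likelihood-gradient ascent on the search distribution's mean and std
        mu += alpha_mu * (advantage * diff).mean(axis=0)
        sigma += alpha_sigma * (advantage * (diff ** 2 - sigma ** 2) / sigma).mean(axis=0)
        sigma = np.maximum(sigma, 1e-3)  # keep exploration noise positive
    return mu

if __name__ == "__main__":
    print(pgpe_sketch())  # should approach [1.0, -2.0, 0.5]
```

Because each sample is a single deterministic rollout per parameter vector, the per-sample return is not itself a noisy sum over action choices, which is the intuition behind the lower-variance gradient estimates the abstract claims.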
Year
2008
DOI
10.1007/978-3-540-87536-9_40
Venue
ICANN (1)
Keywords
detailed analysis, complex control task, policy gradient method, likelihood gradient, policy gradients, policy gradient, humanoid robot, observable Markov decision problem, model-free reinforcement, variance gradient estimate, finite difference method, parameter-based exploration, reinforcement learning, parameter space, gradient method
Field
Population, Decision problem, Mathematical optimization, Computer science, Markov chain, Heuristics, Artificial intelligence, Sampling (statistics), Finite difference method, Machine learning, Reinforcement learning, Humanoid robot
DocType
Conference
Volume
5163
ISSN
0302-9743
Citations
22
PageRank
1.13
References
9
Authors
6
Name                   Order  Citations  PageRank
Frank Sehnke           1      527        39.18
Christian Osendorfer   2      125        13.24
Thomas Rückstieß       3      112        20.66
Alex Graves            4      8572       405.10
Jan Peters             5      3553       264.28
Jürgen Schmidhuber     6      17836      1238.63