Title
Parameter-exploring policy gradients
Abstract
We present a model-free reinforcement learning method for partially observable Markov decision problems. Our method estimates a likelihood gradient by sampling directly in parameter space, which yields lower-variance gradient estimates than those obtained by regular policy gradient methods. We show that, for several complex control tasks, including robust standing with a humanoid robot, this method outperforms well-known algorithms from the fields of standard policy gradients, finite-difference methods and population-based heuristics. We also show that the improvement is largest when the parameter samples are drawn symmetrically. Lastly, we analyse the importance of the individual components of our method by incrementally incorporating them into the other algorithms and measuring the gain in performance after each step.
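The update the abstract describes is simple enough to sketch. Below is a minimal illustration of parameter-based exploration with symmetric sampling, in the spirit of the method: a Gaussian over policy parameters is perturbed, each perturbation is evaluated at mu + eps and mu - eps, the difference of the two returns drives the mean update, and their average (against a baseline) adapts the per-parameter exploration widths. The toy reward function, hyperparameters, and variable names are illustrative assumptions, not the paper's reference implementation.

    import numpy as np

    # Toy episodic reward: higher as the parameter vector nears a fixed target.
    # (Assumed stand-in for an actual rollout return.)
    def reward(theta):
        target = np.array([1.0, -2.0, 0.5])
        return -np.sum((theta - target) ** 2)

    rng = np.random.default_rng(0)
    mu = np.zeros(3)            # mean of the Gaussian over policy parameters
    sigma = np.full(3, 2.0)     # per-parameter exploration standard deviations
    alpha_mu, alpha_sigma = 0.02, 0.01
    baseline = 0.0              # moving-average reward baseline

    for step in range(500):
        eps = rng.normal(0.0, sigma)          # one perturbation per update
        r_plus = reward(mu + eps)             # symmetric pair of samples
        r_minus = reward(mu - eps)

        r_t = (r_plus - r_minus) / 2.0                # drives the mean update
        r_s = (r_plus + r_minus) / 2.0 - baseline     # drives the width update

        mu += alpha_mu * r_t * eps
        sigma += alpha_sigma * r_s * (eps ** 2 - sigma ** 2) / sigma
        sigma = np.clip(sigma, 1e-3, None)    # keep exploration noise positive

        baseline = 0.9 * baseline + 0.1 * (r_plus + r_minus) / 2.0

    print(mu)  # should approach the target [1.0, -2.0, 0.5]

Because the gradient is estimated over a distribution of deterministic policies rather than by injecting noise at every action, a single perturbation per episode suffices, which is where the variance reduction claimed in the abstract comes from; the symmetric pair additionally cancels the baseline-dependent term in the mean update.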
Year
2010
DOI
10.1016/j.neunet.2009.12.004
Venue
Neural Networks
Keywords
Policy gradients, Stochastic optimisation, Reinforcement learning, Robotics, Control
DocType
Journal
Volume
23
Issue
4
ISSN
0893-6080
Citations
33
PageRank
1.81
References
8
Authors
6
Name                  Order  Citations  PageRank
Frank Sehnke          1      527        39.18
Christian Osendorfer  2      125        13.24
Thomas Rückstieß      3      112        20.66
Alex Graves           4      8572       405.10
Jan Peters            5      3553       264.28
Jürgen Schmidhuber    6      17836      1238.63