Title
Exploring Parameter Space in Reinforcement Learning
Abstract
This paper discusses parameter-based exploration methods for reinforcement learning. Parameter-based methods perturb parameters of a general function approximator directly, rather than adding noise to the resulting actions. Parameter-based exploration unifies reinforcement learning and black-box optimization, and has several advantages over action perturbation. We review two recent parameter-exploring algorithms: Natural Evolution Strategies and Policy Gradients with Parameter-Based Exploration. Both outperform state-of-the-art algorithms in several complex high-dimensional tasks commonly found in robot control. Furthermore, we describe how a novel exploration method, State-Dependent Exploration, can modify existing algorithms to mimic exploration in parameter space.
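The abstract's core distinction, perturbing policy parameters once per episode instead of adding noise to each action, can be illustrated with a minimal sketch. This is an assumption-laden toy (the linear policy, the dimensions, and the noise scale are all hypothetical, not from the paper), shown only to contrast the two exploration styles:

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_policy(theta, state):
    """Deterministic linear policy: action = theta @ state (illustrative only)."""
    return theta @ state

theta = rng.normal(size=(2, 4))   # policy parameters: 2 action dims, 4 state dims
state = rng.normal(size=4)

# Action perturbation: noise is injected into every action, at every time step.
noisy_action = linear_policy(theta, state) + rng.normal(scale=0.1, size=2)

# Parameter perturbation (as in PGPE / evolution strategies): draw one perturbed
# parameter vector per episode, then act deterministically with it throughout.
theta_episode = theta + rng.normal(scale=0.1, size=theta.shape)
action = linear_policy(theta_episode, state)
```

Because the perturbed parameters stay fixed within an episode, the resulting trajectory is self-consistent, which is one of the advantages the paper attributes to parameter-based exploration over per-step action noise.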
Year
2010
DOI
10.2478/s13230-010-0002-4
Venue
Paladyn
Keywords
reinforcement learning, optimization, exploration, policy gradients, parameter space, evolution strategy, generating function, robot control
Field
Robot control, Computer science, Simulation, Artificial intelligence, Parameter space, Machine learning, Reinforcement learning
DocType
Journal
Volume
1
Issue
1
ISSN
2080-9778
Citations
24
PageRank
1.17
References
19
Authors
6
Name                 Order  Citations  PageRank
Thomas Rückstieß       1        24        1.17
Frank Sehnke           2       527       39.18
Tom Schaul             3       916       79.40
Daan Wierstra          4      5412      255.92
Yi Sun                 5        74       10.99
Jürgen Schmidhuber     6     17836     1238.63