Title: Emergence of Locomotion Behaviours in Rich Environments
Abstract: The reinforcement learning paradigm allows, in principle, for complex behaviours to be learned directly from simple reward signals. In practice, however, it is common to carefully hand-design the reward function to encourage a particular solution, or to derive it from demonstration data. In this paper we explore how a rich environment can help to promote the learning of complex behaviour. Specifically, we train agents in diverse environmental contexts, and find that this encourages the emergence of robust behaviours that perform well across a suite of tasks. We demonstrate this principle for locomotion -- behaviours that are known for their sensitivity to the choice of reward. We train several simulated bodies on a diverse set of challenging terrains and obstacles, using a simple reward function based on forward progress. Using a novel scalable variant of policy gradient reinforcement learning, our agents learn to run, jump, crouch and turn as required by the environment without explicit reward-based guidance. A visual depiction of highlights of the learned behaviour can be viewed following this https URL.
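The abstract's key design choice is a "simple reward function based on forward progress" rather than a hand-shaped reward. A minimal sketch of what such a reward could look like is below; the function name, the fixed control timestep, and the idea of measuring torso displacement along the track are illustrative assumptions, not details taken from the paper itself.

```python
# Hypothetical sketch of a forward-progress reward: the per-step reward is
# simply the velocity of the body's torso along the direction of travel.
# All names and the timestep value are assumptions for illustration.

def forward_progress_reward(x_before: float, x_after: float, dt: float) -> float:
    """Reward proportional to forward velocity over one control step."""
    return (x_after - x_before) / dt

# Example: the torso advances from x = 1.0 m to x = 1.05 m in a 0.025 s step,
# giving a reward equal to a forward velocity of 2.0 m/s.
r = forward_progress_reward(1.0, 1.05, 0.025)
```

Because the reward says nothing about *how* to move, behaviours such as jumping or crouching must emerge from the interaction between this signal and the terrain, which is exactly the effect the paper studies.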
Year: 2017
Venue: arXiv: Artificial Intelligence
Field: Suite, Computer science, Depiction, Artificial intelligence, Jump, Machine learning, Scalability, Reinforcement learning
DocType: Journal
Volume: abs/1707.02286
Citations: 52
PageRank: 1.67
References: 11
Authors: 12
Name                Order  Citations  PageRank
Nicolas Heess       1      1762       94.77
Dhruva TB           2      71         2.68
Sriram Srinivasan   3      379        27.92
Jay Lemmon          4      69         2.32
Josh S. Merel       5      143        11.34
Greg Wayne          6      592        31.86
Yuval Tassa         7      1097       52.33
Tom Erez            8      1027       50.56
Ziyu Wang           9      372        23.71
S. M. Ali Eslami    10     119        8.58
Martin Riedmiller   11     5655       366.29
David Silver        12     8252       363.86