Title
Safe Exploration in Continuous Action Spaces
Abstract
We address the problem of deploying a reinforcement learning (RL) agent on a physical system, such as a datacenter cooling unit or a robot, where critical constraints must never be violated. We show how to exploit the typically smooth dynamics of these systems so that RL algorithms never violate constraints during learning. Our technique is to directly add to the policy a safety layer that analytically solves an action-correction formulation at each state. An elegant closed-form solution is made possible by a linearized model, learned from past trajectories of arbitrary actions. This mimics the real-world circumstance in which data logs were generated by a behavior policy that cannot plausibly be described mathematically; such cases render known safety-aware off-policy methods inapplicable. We demonstrate the efficacy of our approach on new, representative physics-based environments, where we maintain zero constraint violations and thus prevail where reward shaping fails.
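The per-state action correction described in the abstract lends itself to a compact illustration. The sketch below assumes a linearized constraint model of the form c_i(s) + g_i(s)^T a <= C_i for each safety signal i, and that at most one constraint is active at a time, which is what admits a closed-form projection; the function name, argument layout, and the small epsilon for numerical stability are our own illustrative choices, not taken from the paper.

```python
import numpy as np

def safety_layer(mu, c, G, C, eps=1e-8):
    """Correct a proposed action so the linearized constraints hold.

    mu : (d,)   action proposed by the policy at the current state
    c  : (k,)   current values of the k safety signals, c_i(s)
    G  : (k, d) learned per-signal sensitivities g_i(s) w.r.t. the action
    C  : (k,)   constraint thresholds C_i

    Solves  argmin_a 0.5 * ||a - mu||^2  s.t.  c_i + g_i^T a <= C_i
    in closed form, under the single-active-constraint assumption.
    """
    # Lagrange multiplier per constraint; zero means the constraint is inactive.
    lam = np.maximum((G @ mu + c - C) / (np.sum(G * G, axis=1) + eps), 0.0)
    i = np.argmax(lam)         # index of the (at most one) active constraint
    return mu - lam[i] * G[i]  # project the action back into the safe set
```

For example, with a single safety signal at c(s) = 0.9, threshold C = 1.0, and sensitivity g = (1, 0), a proposed action (0.8, -0.2) is corrected to (0.1, -0.2), which satisfies 0.9 + 0.1 <= 1.0 with equality.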
Year: 2018
Venue: arXiv: Artificial Intelligence
Field: Computer science, Physical system, Exploit, Artificial intelligence, Novelty, Robot, Machine learning, Reinforcement learning
DocType:
Volume: abs/1801.08757
Citations: 4
Journal:
PageRank: 0.42
References: 8
Authors: 6
Name                     Order  Citations  PageRank
Gal Dalal                1      14         5.16
Krishnamurthy Dvijotham  2      187        26.90
Matej Vecerik            3      139        5.10
Todd Hester              4      330        31.53
Cosmin Paduraru          5      4          1.09
Yuval Tassa              6      1097       52.33