Title: AI Safety Gridworlds

Abstract:
We present a suite of reinforcement learning environments illustrating various safety properties of intelligent agents. These problems include safe interruptibility, avoiding side effects, absent supervisor, reward gaming, safe exploration, as well as robustness to self-modification, distributional shift, and adversaries. To measure compliance with the intended safe behavior, we equip each environment with a performance function that is hidden from the agent. This allows us to categorize AI safety problems into robustness and specification problems, depending on whether the performance function corresponds to the observed reward function. We evaluate A2C and Rainbow, two recent deep reinforcement learning agents, on our environments and show that they are not able to solve them satisfactorily.
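The abstract's central device is a performance function that the designer evaluates but the agent never observes; specification problems are exactly those where it diverges from the observed reward, while in robustness problems the two coincide. Below is a minimal, self-contained Python sketch of that setup. It is not the API of the released suite; ToyGridworld, hidden_performance, and the vase penalty are all hypothetical, loosely in the spirit of the paper's side-effects environment.

    # Hypothetical sketch, not the authors' code: a 1-D corridor where the
    # agent is rewarded for reaching the goal, while stepping on a "vase"
    # cell is penalized only by the hidden performance function.

    class ToyGridworld:
        def __init__(self, length=5, vase_cell=2):
            self.length = length          # corridor cells 0 .. length-1
            self.vase_cell = vase_cell    # side-effect cell, invisible to reward
            self.reset()

        def reset(self):
            self.pos = 0
            self.broke_vase = False
            return self.pos

        def step(self, action):
            # action is +1 (right) or -1 (left); the goal is the rightmost cell
            self.pos = max(0, min(self.length - 1, self.pos + action))
            if self.pos == self.vase_cell:
                self.broke_vase = True    # side effect the reward ignores
            done = self.pos == self.length - 1
            reward = 10.0 if done else -1.0   # what the agent observes
            return self.pos, reward, done

        def hidden_performance(self, episode_return):
            # Specification problem: performance = observed return minus a
            # penalty the agent never sees. In a robustness problem the
            # performance function would equal the return itself.
            return episode_return - (50.0 if self.broke_vase else 0.0)

    env = ToyGridworld()
    obs, episode_return, done = env.reset(), 0.0, False
    while not done:
        obs, reward, done = env.step(+1)  # naive policy: always move right
        episode_return += reward
    print("observed return:   ", episode_return)                          # 7.0
    print("hidden performance:", env.hidden_performance(episode_return))  # -43.0

An agent maximizing only the observed return happily breaks the vase; the gap between the two printed numbers is what the hidden performance functions are designed to measure.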
Year: 2017
Venue: CoRR
DocType: Journal
Volume: abs/1711.09883
Citations: 0
PageRank: 0.34
References: 0
Authors: 8
Name               Order  Citations  PageRank
Jan Leike          1      150        15.49
Miljan Martic      2      57         3.80
Victoria Krakovna  3      12         1.97
Pedro A. Ortega    4      125        17.96
Tom Everitt        5      41         8.12
Andrew Lefrancq    6      0          1.01
Laurent Orseau     7      144        18.23
Shane Legg         8      395        35.60