Title
Formal Verification for Safe Deep Reinforcement Learning in Trajectory Generation
Abstract
We consider the problem of Safe Deep Reinforcement Learning (DRL) using formal verification in a trajectory generation task. In more detail, we propose an approach to verify whether a trained model can generate trajectories that are guaranteed to meet safety properties (e.g., operating in a limited workspace). We show that our verification approach, based on interval analysis, provably determines whether a model meets pre-specified safety properties and returns the input values that cause a violation of such properties. Furthermore, we show that an optimized DRL approach (i.e., one using a scaling discount factor and a mixed exploration policy based on a directional controller) can reach the target with millimeter precision while reducing the set of inputs that cause safety violations. Crucially, in our experiments, the number of undesirable inputs is so low that they can be removed directly with a simple controller.
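As a rough illustration of the interval-analysis idea the abstract describes, the sketch below propagates an input box through a small ReLU network (interval bound propagation) and checks the resulting output bounds against a safe range. This is not the authors' code: the network shape, the safe-range values, and all identifiers (interval_forward, W_list, b_list) are hypothetical assumptions for the example.

```python
import numpy as np

def interval_forward(W_list, b_list, lo, hi):
    """Propagate the input box [lo, hi] through a fully connected ReLU net.

    Each affine layer y = W x + b is bounded by splitting W into its
    positive and negative parts, giving a sound over-approximation of
    the reachable output set.
    """
    for i, (W, b) in enumerate(zip(W_list, b_list)):
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        new_lo = W_pos @ lo + W_neg @ hi + b
        new_hi = W_pos @ hi + W_neg @ lo + b
        lo, hi = new_lo, new_hi
        if i < len(W_list) - 1:  # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

# Hypothetical usage: verify that, for every input in the box, the
# generated trajectory command stays inside assumed workspace limits.
rng = np.random.default_rng(0)
W_list = [rng.normal(size=(16, 4)), rng.normal(size=(3, 16))]
b_list = [np.zeros(16), np.zeros(3)]
in_lo, in_hi = -np.ones(4), np.ones(4)   # input region under verification
out_lo, out_hi = interval_forward(W_list, b_list, in_lo, in_hi)
safe_lo, safe_hi = -5.0, 5.0             # assumed workspace bounds
if (out_lo >= safe_lo).all() and (out_hi <= safe_hi).all():
    print("property verified on this input region")
else:
    print("possible violation in input region", in_lo, in_hi)
```

In practice such a check is applied recursively: regions where the bounds overlap the unsafe set are split and re-analyzed, so the violating inputs the abstract mentions can be isolated.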
Year
2020
DOI
10.1109/IRC.2020.00062
Venue
2020 Fourth IEEE International Conference on Robotic Computing (IRC)
Keywords
Deep Reinforcement Learning, Neural Network Verification, Robotics
DocType
Conference
ISBN
978-1-7281-5238-7
Citations
0
PageRank
0.34
References
3
Authors
4
Name                  Order  Citations  PageRank
Davide Corsi          1      1          2.05
Enrico Marchesini     2      0          2.03
Alessandro Farinelli  3      667        74.16
Paolo Fiorini         4      1068       134.11