Abstract
---
We study continuous-action reinforcement learning problems in which it is crucial that the agent interacts with the environment only through safe policies, i.e., policies that keep the agent in desirable situations, both during training and at convergence. We formulate these problems as *constrained* Markov decision processes (CMDPs) and present safe policy optimization algorithms based on a Lyapunov approach to solve them. Our algorithms can use any standard policy gradient (PG) method, such as deep deterministic policy gradient (DDPG) or proximal policy optimization (PPO), to train a neural network policy, while guaranteeing near-constraint satisfaction for every policy update by projecting either the policy parameter or the selected action onto the set of feasible solutions induced by the state-dependent linearized Lyapunov constraints. Compared to existing constrained PG algorithms, ours are more data-efficient because they can utilize both on-policy and off-policy data. Moreover, our action-projection algorithm often leads to less conservative policy updates and allows for natural integration into an end-to-end PG training pipeline. We evaluate our algorithms and compare them with state-of-the-art baselines on several simulated (MuJoCo) tasks, as well as a real-world robot obstacle-avoidance problem, demonstrating their effectiveness in balancing performance and constraint satisfaction.
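
As a rough illustration of the action-projection idea described in the abstract, the sketch below projects a proposed action onto a single state-dependent linearized constraint (a half-space), for which the L2 projection has a closed form. The names `project_action`, `g`, and `b`, and the example values, are illustrative assumptions for this sketch, not details taken from the paper, where the projection is induced by the linearized Lyapunov constraints and may involve more than one constraint.

```python
# Minimal sketch (assumed, not the paper's implementation): project a policy's
# proposed action onto the half-space {a : g^T a <= b} given by one
# state-dependent linearized safety constraint.
import numpy as np

def project_action(a, g, b):
    """Closed-form L2 projection of action `a` onto {a : g^T a <= b}."""
    violation = g @ a - b
    if violation <= 0.0:
        return a                                # already feasible, keep the action
    return a - (violation / (g @ g)) * g        # closest feasible point in L2 norm

# Usage: wrap the (hypothetical) unconstrained policy output with the projection.
a_raw = np.array([0.8, -0.3])                   # action proposed by the policy network
g = np.array([1.0, 0.5])                        # linearized constraint gradient at the state
b = 0.2                                         # constraint budget at the state
a_safe = project_action(a_raw, g, b)            # action actually sent to the environment
```
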
Year | Venue | DocType |
---|---|---|
2020 | CoRL | Conference |

Citations | PageRank | References
---|---|---|
0 | 0.34 | 0

Authors (5)

Name | Order | Citations | PageRank |
---|---|---|---|
Yinlam Chow | 1 | 98 | 14.03
Ofir Nachum | 2 | 94 | 12.01 |
Aleksandra Faust | 3 | 68 | 14.83 |
Edgar Duenez-Guzman | 4 | 0 | 0.34 |
Mohammad Ghavamzadeh | 5 | 5 | 0.74