Title |
---|
Cutting Your Losses: Learning Fault-Tolerant Control and Optimal Stopping under Adverse Risk |
Abstract |
---|
Recently, there has been a surge of interest in safe and robust techniques within reinforcement learning (RL). Current notions of risk in RL fail to capture the potential for systemic failures, such as abrupt stoppages due to system faults or the breaching of safety thresholds, and the appropriate responsive controls in such instances. We propose a novel approach to risk minimisation within RL in which, in addition to taking actions that maximise its expected return, the controller learns a policy that is robust against stoppages due to an adverse event such as an abrupt failure. The results of the paper cover fault-tolerant control in worst-case scenarios under random stopping and optimal stopping, all in unknown environments. By demonstrating that this class of problems is represented by a variant of stochastic games, we prove the existence of a solution which is a unique fixed point equilibrium of the game and characterise the optimal controller behaviour. We then introduce a value function approximation algorithm that converges to the solution through simulation in unknown environments. |
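To make the fixed-point claim in the abstract concrete, the sketch below solves a classical single-agent optimal-stopping problem by iterating a Bellman operator to its unique fixed point. This is an illustrative example only, not the paper's algorithm: the transition matrix, rewards, and discount factor are hypothetical, and the adversarial (game-theoretic) component of the paper is omitted.

```python
import numpy as np

# Illustrative optimal-stopping value iteration (not the paper's method).
# We compute the fixed point of V(s) = max(g(s), r(s) + gamma * sum_s' P[s, s'] V(s')),
# where g is the stopping payoff and r the running reward. Because gamma < 1,
# the operator is a contraction, so iteration converges to a unique fixed point.

def solve_optimal_stopping(P, r, g, gamma=0.9, tol=1e-8, max_iter=10_000):
    """P: (n, n) transition matrix; r: running rewards; g: stopping payoffs."""
    V = np.zeros(len(r))
    for _ in range(max_iter):
        V_new = np.maximum(g, r + gamma * P @ V)  # choose: stop vs. continue
        if np.max(np.abs(V_new - V)) < tol:       # contraction => unique fixed point
            return V_new
        V = V_new
    return V

# Hypothetical 3-state Markov chain; state 2 offers the best stopping payoff.
P = np.array([[0.5, 0.5, 0.0],
              [0.1, 0.6, 0.3],
              [0.0, 0.2, 0.8]])
r = np.array([0.0, 0.1, 0.2])   # running rewards
g = np.array([0.0, 0.5, 1.0])   # stopping payoffs
V = solve_optimal_stopping(P, r, g)
```

At the fixed point, the value function dominates the stopping payoff everywhere (the controller can always stop immediately), and the stopping region is exactly the set of states where stopping attains the maximum.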
Year | Venue | DocType |
---|---|---|
2019 | arXiv: Systems and Control | Journal |
Volume | Citations | PageRank
---|---|---|
abs/1902.05045 | 0 | 0.34
References | Authors
---|---|
7 | 1
Name | Order | Citations | PageRank |
---|---|---|---|
David Mguni | 1 | 1 | 2.71 |