Abstract |
---|
We study a security threat to reinforcement learning where an attacker poisons the learning environment to force the agent into executing a target policy chosen by the attacker. As a victim, we consider RL agents whose objective is to find a policy that maximizes reward in infinite-horizon problem settings. The attacker can manipulate the rewards and the transition dynamics in the learning environment at training time, and is interested in doing so in a stealthy manner. We propose an optimization framework for finding an optimal stealthy attack for different measures of attack cost. We provide lower/upper bounds on the attack cost, and instantiate our attacks in two settings: (i) an offline setting where the agent is doing planning in the poisoned environment, and (ii) an online setting where the agent is learning a policy with poisoned feedback. Our results show that the attacker can easily succeed in teaching any target policy to the victim under mild conditions, and they highlight a significant security threat to reinforcement learning agents in practice. |
Field | Value |
---|---|
Year | 2021 |
DOI | v22/20-1329.html |
Venue | Journal of Machine Learning Research |
Keywords | training-time adversarial attacks, reinforcement learning, policy teaching, environment poisoning, security threat |
DocType | Journal |
Volume | 22 |
Issue | 1 |
ISSN | 1532-4435 |
Citations | 0 |
PageRank | 0.34 |
References | 0 |
Authors | 5 |
Name | Order | Citations | PageRank |
---|---|---|---|
Amin Rakhsha | 1 | 0 | 0.34 |
Goran Radanovic | 2 | 52 | 7.48 |
Rati Devidze | 3 | 0 | 1.69 |
Xiaojin Zhu | 4 | 3586 | 222.74 |
Adish Singla | 5 | 397 | 33.45 |