Title
Policy Teaching in Reinforcement Learning via Environment Poisoning Attacks
Abstract
We study a security threat to reinforcement learning in which an attacker poisons the learning environment to force the agent into executing a target policy chosen by the attacker. As the victim, we consider RL agents whose objective is to find a policy that maximizes reward in infinite-horizon problem settings. The attacker can manipulate the rewards and the transition dynamics in the learning environment at training time, and is interested in doing so in a stealthy manner. We propose an optimization framework for finding an optimal stealthy attack under different measures of attack cost. We provide lower/upper bounds on the attack cost, and instantiate our attacks in two settings: (i) an offline setting where the agent plans in the poisoned environment, and (ii) an online setting where the agent learns a policy from poisoned feedback. Our results show that the attacker can easily succeed in teaching any target policy to the victim under mild conditions, and they highlight a significant security threat to reinforcement learning agents in practice.
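The core attack idea can be illustrated with a minimal, non-stealthy reward-poisoning sketch: subtract a large penalty from every non-target action's reward so that the attacker's target policy becomes optimal. The toy MDP, the penalty value, and the NumPy implementation below are our own illustrative assumptions; they are not the paper's optimization framework, which instead minimizes the poisoning cost.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, iters=500):
    """Greedy policy and Q-values for transition tensor P[s,a,s']
    and reward matrix R[s,a] via value iteration."""
    nS, nA = R.shape
    V = np.zeros(nS)
    for _ in range(iters):
        Q = R + gamma * P @ V          # Q[s,a] = R[s,a] + gamma * E[V(s')]
        V = Q.max(axis=1)
    return Q.argmax(axis=1), Q

# A 2-state, 2-action MDP (hypothetical numbers, for illustration only).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.7, 0.3], [0.1, 0.9]]])
R = np.array([[1.0, 0.0],
              [0.5, 2.0]])

target = np.array([1, 0])              # attacker's target policy

# Naive poisoning: penalize every non-target action's reward,
# leaving target-action rewards untouched.
penalty = 10.0
R_poisoned = R.copy()
for s in range(R.shape[0]):
    for a in range(R.shape[1]):
        if a != target[s]:
            R_poisoned[s, a] -= penalty

pi_before, _ = value_iteration(P, R)
pi_after, _ = value_iteration(P, R_poisoned)
print(pi_before, pi_after)
```

With a penalty large enough to dominate the discounted future-value differences, the agent trained on the poisoned rewards follows the target policy exactly; the paper's framework replaces this blunt modification with the cheapest (stealthiest) one that achieves the same effect.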
Year
2021
DOI
v22/20-1329.html
Venue
JOURNAL OF MACHINE LEARNING RESEARCH
Keywords
training-time adversarial attacks, reinforcement learning, policy teaching, environment poisoning, security threat
DocType
Journal
Volume
22
Issue
1
ISSN
1532-4435
Citations
0
PageRank
0.34
References
0
Authors
5
Name | Order | Citations | PageRank
Amin Rakhsha | 1 | 0 | 0.34
Goran Radanovic | 2 | 52 | 7.48
Rati Devidze | 3 | 0 | 1.69
Xiaojin Zhu | 4 | 3586 | 222.74
Adish Singla | 5 | 397 | 33.45