Title
Policy Poisoning in Batch Reinforcement Learning and Control
Abstract
We study a security threat to batch reinforcement learning and control in which the attacker aims to poison the learned policy. The victim is a reinforcement learner or controller that first estimates the dynamics and the rewards from a batch data set, and then solves for the optimal policy with respect to those estimates. The attacker can modify the data set slightly before learning happens and wants to force the learner into learning a target policy chosen by the attacker. We present a unified framework for solving batch policy poisoning attacks, and instantiate the attack on two standard victims: the tabular certainty-equivalence learner in reinforcement learning and the linear quadratic regulator in control. We show that both instantiations result in convex optimization problems for which global optimality is guaranteed, and provide analysis of attack feasibility and attack cost. Experiments show the effectiveness of policy poisoning attacks.
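To make the abstract's formulation concrete, below is a minimal sketch (not the authors' released code) of a reward-only poisoning attack on the tabular certainty-equivalence victim. It assumes the victim estimates transition probabilities P_hat and mean rewards r0 from the batch and then plans with discounted dynamic programming, and it simplifies the setup by perturbing the victim's reward estimate directly (leaving P_hat fixed) so that a chosen target policy pi_dagger becomes optimal with margin eps. All names (P_hat, r0, pi_dagger, gamma, eps) are illustrative placeholders; cvxpy is used only for convenience.

import numpy as np
import cvxpy as cp

def poison_rewards(P_hat, r0, pi_dagger, gamma=0.9, eps=0.1):
    # P_hat: (S, A, S) estimated transition probabilities (kept fixed by the attacker)
    # r0:    (S, A) the victim's clean reward estimate
    # pi_dagger: length-S array of target actions the attacker wants to be greedy-optimal
    S, A = r0.shape
    r = cp.Variable((S, A))  # poisoned reward estimate (decision variable)

    # Because the target policy is fixed, its value function is affine in r:
    #   V = (I - gamma * P_pi)^{-1} r_pi
    P_pi = np.stack([P_hat[s, pi_dagger[s]] for s in range(S)])   # (S, S)
    inv = np.linalg.inv(np.eye(S) - gamma * P_pi)                 # (S, S)
    r_pi = cp.hstack([r[s, pi_dagger[s]] for s in range(S)])      # (S,)
    V = inv @ r_pi                                                # affine in r

    # In every state, the target action must beat every other action by margin eps.
    constraints = []
    for s in range(S):
        q_target = r[s, pi_dagger[s]] + gamma * P_hat[s, pi_dagger[s]] @ V
        for a in range(A):
            if a != pi_dagger[s]:
                q_other = r[s, a] + gamma * P_hat[s, a] @ V
                constraints.append(q_target >= q_other + eps)

    # Attack cost: minimize the perturbation to the reward estimate.
    prob = cp.Problem(cp.Minimize(cp.norm(r - r0, "fro")), constraints)
    prob.solve()
    return r.value

Since the constraints are linear in r and the objective is a norm, the whole program is convex, which mirrors the global-optimality claim in the abstract; the same template could in principle be extended to weight each entry by how many batch transitions it summarizes.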
Year
2019
Venue
Advances in Neural Information Processing Systems 32 (NIPS 2019)
Keywords
reinforcement learning, the attacker, the victim
Field
Computer science, Artificial intelligence, Machine learning, Reinforcement learning
DocType
Conference
Volume
32
ISSN
1049-5258
Citations
0
PageRank
0.34
References
0
Authors
4
Name, Order, Citations, PageRank
Yuzhe Ma, 1, 5, 3.51
Xuezhou Zhang, 2, 1, 4.41
Wen Sun, 3, 28, 10.46
Xiaojin Zhu, 4, 3586, 222.74