Title
Detecting Adversarial Attacks on Neural Network Policies with Visual Foresight.
Abstract
Deep reinforcement learning has shown promising results in learning control policies for complex sequential decision-making tasks. However, these neural network-based policies are known to be vulnerable to adversarial examples. This vulnerability poses a potentially serious threat to safety-critical systems such as autonomous vehicles. In this paper, we propose a defense mechanism that protects reinforcement learning agents from adversarial attacks by leveraging an action-conditioned frame prediction module. Our core idea is that adversarial examples crafted against a neural network-based policy are not effective against the frame prediction model. By comparing the action distribution the policy produces from the currently observed frame with the action distribution the same policy produces from the frame predicted by the action-conditioned frame prediction module, we can detect the presence of adversarial examples. Beyond detection, our method allows the agent to continue performing the task using the predicted frame when it is under attack. We evaluate our algorithm on five Atari 2600 games. Our results demonstrate that the proposed defense mechanism achieves favorable performance compared with baseline algorithms, both in detecting adversarial examples and in earning rewards when the agents are under attack.
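The detection-and-defense loop described in the abstract can be summarized in a short sketch. The Python code below is illustrative only: the policy(frame) and predictor(frames, actions) interfaces, the L1 distance between action distributions, and the threshold value are assumptions standing in for the agent's policy network and the frame prediction module, not the authors' exact implementation.

import numpy as np

def detect_and_act(policy, predictor, past_frames, past_actions,
                   current_frame, threshold=0.01):
    # Action distribution from the (possibly adversarially perturbed) observed frame.
    p_observed = policy(current_frame)

    # Frame predicted from recent frames and actions by the action-conditioned
    # frame prediction module, and the action distribution the same policy
    # assigns to that predicted frame.
    predicted_frame = predictor(past_frames, past_actions)
    p_predicted = policy(predicted_frame)

    # A large discrepancy between the two distributions signals an attack.
    # (The L1 distance and the 0.01 threshold are illustrative assumptions.)
    divergence = float(np.abs(p_observed - p_predicted).sum())
    under_attack = divergence > threshold

    # When an attack is detected, act on the predicted frame instead of the
    # observed one so the agent can keep performing the task.
    action_probs = p_predicted if under_attack else p_observed
    return under_attack, int(np.argmax(action_probs))

Here policy is assumed to return a probability vector over the game's actions, and predictor to return the next frame given recent frames and actions; both are hypothetical stand-ins for the components named in the abstract.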
Year: 2017
Venue: arXiv: Computer Vision and Pattern Recognition
Field: Computer science, Futures studies, Artificial intelligence, Artificial neural network, Machine learning, Vulnerability, Adversarial system, Reinforcement learning
DocType:
Volume: abs/1710.00814
Citations: 6
Journal:
PageRank: 0.47
References: 32
Authors: 4
Name, Order, Citations, PageRank
Yen-Chen Lin, 1, 36, 3.92
Ming-Yu Liu, 2, 872, 35.44
Min Sun, 3, 1083, 59.15
Jia-Bin Huang, 4, 920, 42.90