Title
Black Box Attacks on Deep Anomaly Detectors
Abstract
Anomaly detection is the process of identifying the true anomalies in a given set of data instances. It has been applied to a diverse set of problems across multiple application domains, including cybersecurity. Deep learning has recently demonstrated state-of-the-art performance on key anomaly detection applications, such as intrusion detection, Denial of Service (DoS) attack detection, security log analysis, and malware detection. Despite the great successes achieved by neural network architectures, models with very low test error have been shown to be consistently vulnerable to small, adversarially chosen perturbations of the input. The existence of evasion attacks during the test phase of machine learning algorithms represents a significant challenge to both their deployment and their understanding. Recent approaches in the literature have focused on three areas: (a) generating adversarial examples for supervised machine learning in multiple domains; (b) countering the attacks with various defenses; and (c) providing theoretical guarantees on the robustness of machine learning models by understanding their security properties. However, they have not covered the anomaly detection task in a black box setting. Exploring black box attack strategies that reduce the number of queries needed to find adversarial examples with high probability is an important problem. In this paper, we study the security of black box deep anomaly detectors under a realistic threat model. We propose a novel black box attack for query-constrained settings. First, we run manifold approximation on samples collected at the attacker's end, both to reduce queries and to understand the thresholds set by the underlying anomaly detector; we then use spherical adversarial subspaces to generate attack samples.
This method is well suited to attacking anomaly detectors where the decision boundaries between the nominal and abnormal classes are not well defined and decisions are made by applying a set of thresholds to anomaly scores. We validate our attack on state-of-the-art deep anomaly detectors and show that the attacker's goal is achieved under constrained settings. Our evaluation of the proposed approach shows promising results and demonstrates that our strategy can be successfully used against other anomaly detectors.
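The spherical-subspace idea in the abstract can be sketched as a query-limited random search: perturb a flagged sample along points drawn uniformly from an epsilon-sphere, query the black-box detector's anomaly score, and stop once the score falls below the detector's threshold. This is only an illustrative sketch of the sphere-sampling step, not the authors' full method (which also uses manifold approximation); the toy `score` function and the names `spherical_evasion_attack`, `threshold`, and `budget` are assumptions for illustration.

```python
import math
import random

def sample_sphere(dim, radius, rng):
    """Draw a point uniformly from the surface of a sphere (Gaussian trick:
    normalize a standard-normal vector, then scale to the given radius)."""
    v = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    norm = math.sqrt(sum(c * c for c in v))
    return [radius * c / norm for c in v]

def spherical_evasion_attack(x, score_fn, threshold, radius, budget, seed=0):
    """Probe points on an epsilon-sphere around x under a limited query budget;
    return the first candidate whose anomaly score falls below the detector's
    threshold (i.e., it is no longer flagged), or None if the budget runs out."""
    rng = random.Random(seed)
    for _ in range(budget):
        delta = sample_sphere(len(x), radius, rng)
        candidate = [xi + di for xi, di in zip(x, delta)]
        if score_fn(candidate) < threshold:  # one black-box query
            return candidate
    return None

# Toy black-box detector (a hypothetical stand-in): the anomaly score is the
# distance from the origin, and samples scoring >= 4.5 are flagged.
score = lambda p: math.sqrt(sum(c * c for c in p))
x_anomalous = [3.0, 4.0]  # score 5.0, so the detector flags it
adv = spherical_evasion_attack(x_anomalous, score, threshold=4.5,
                               radius=1.0, budget=200)
```

In this toy setting the search succeeds because a fraction of the sphere around the flagged point lies inside the detector's acceptance region; the paper's query-reduction step (manifold approximation) exists precisely because such blind sampling is query-expensive in high dimensions.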
Year: 2019
DOI: 10.1145/3339252.3339266
Venue: Proceedings of the 14th International Conference on Availability, Reliability and Security
Keywords: Anomaly detection, Black box attacks, Neural networks
Field: Black box (phreaking), Data mining, Anomaly detection, Denial-of-service attack, Threat model, Computer science, Robustness (computer science), Artificial intelligence, Deep learning, Artificial neural network, Intrusion detection system
DocType: Conference
ISBN: 978-1-4503-7164-3
Citations: 2
PageRank: 0.38
References: 0
Authors: 4
Name, Order, Citations, PageRank
Aditya Kuppa, 1, 2, 1.74
Slawomir Grzonkowski, 2, 2, 0.38
Muhammad Rizwan Asghar, 3, 121, 23.64
Nhien-An Le-Khac, 4, 224, 49.63