Title
A taxonomy and survey of attacks against machine learning
Abstract
The majority of machine learning methodologies operate with the assumption that their environment is benign. However, this assumption does not always hold, as it is often advantageous to adversaries to maliciously modify the training data (poisoning attacks) or the test data (evasion attacks). Such attacks can be catastrophic given the growth and penetration of machine learning applications in society. Therefore, there is a need to secure machine learning, enabling its safe adoption in adversarial settings such as spam filtering, malware detection, and biometric recognition. This paper presents a taxonomy and survey of attacks against systems that use machine learning. It organizes the body of knowledge in adversarial machine learning so as to identify the aspects to which researchers from different fields can contribute. The taxonomy identifies attacks that share key characteristics and can therefore potentially be addressed by the same defence approaches. Thus, the proposed taxonomy makes it easier to understand the existing attack landscape towards the development of defence mechanisms, which are not investigated in this survey. The taxonomy is also leveraged to identify open problems that can lead to new research areas within the field of adversarial machine learning.
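To make the poisoning/evasion distinction in the abstract concrete, the sketch below contrasts the two attack surfaces on a toy linear classifier. It is not taken from the surveyed paper: the synthetic dataset, the scikit-learn LogisticRegression model, the label-flipping rate, and the weight-vector perturbation are all illustrative assumptions.

```python
# Minimal, assumed illustration (not from the surveyed paper) of the two
# attack classes named in the abstract: poisoning (corrupt training data)
# versus evasion (perturb test data against an already-trained model).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Poisoning attack: flip 10% of the training labels before fitting,
# then measure the poisoned model against the clean labels.
y_poisoned = y.copy()
flipped = rng.choice(len(y), size=len(y) // 10, replace=False)
y_poisoned[flipped] = 1 - y_poisoned[flipped]
poisoned_model = LogisticRegression().fit(X, y_poisoned)
print("poisoned model, accuracy on clean labels:", poisoned_model.score(X, y))

# Evasion attack: leave training untouched, then push one test point along
# the trained model's weight vector until its predicted label changes.
clean_model = LogisticRegression().fit(X, y)
x_test = X[0].copy()
original_label = clean_model.predict(x_test.reshape(1, -1))[0]
w = clean_model.coef_[0]
unit_w = w / np.linalg.norm(w)
step = -0.25 * unit_w if original_label == 1 else 0.25 * unit_w

x_adv = x_test.copy()
for _ in range(100):
    if clean_model.predict(x_adv.reshape(1, -1))[0] != original_label:
        break
    x_adv += step

print("original prediction:", original_label)
print("prediction after evasion:", clean_model.predict(x_adv.reshape(1, -1))[0])
```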
Year
2019
DOI
10.1016/j.cosrev.2019.100199
Venue
Computer Science Review
Keywords
Machine learning, Attacks, Taxonomy, Survey
Field
Body of knowledge, Computer science, Adversarial machine learning, Artificial intelligence, Test data, Biometrics, Malware, Machine learning, Adversarial system
DocType
Journal
Volume
34
ISSN
1574-0137
Citations
9
PageRank
0.63
References
0
Authors
5