Title
Subpopulation Data Poisoning Attacks
Abstract
Machine learning systems are deployed in critical settings, but they might fail in unexpected ways, impacting the accuracy of their predictions. Poisoning attacks against machine learning induce adversarial modification of data used by a machine learning algorithm to selectively change its output when it is deployed. In this work, we introduce a novel data poisoning attack called a subpopulation attack, which is particularly relevant when datasets are large and diverse. We design a modular framework for subpopulation attacks, instantiate it with different building blocks, and show that the attacks are effective for a variety of datasets and machine learning models. We further optimize the attacks in continuous domains using influence functions and gradient optimization methods. Compared to existing backdoor poisoning attacks, subpopulation attacks have the advantage of inducing misclassification in naturally distributed data points at inference time, making the attacks extremely stealthy. We also show that our attack strategy can be used to improve upon existing targeted attacks. We prove that, under some assumptions, subpopulation attacks are impossible to defend against, and empirically demonstrate the limitations of existing defenses against our attacks, highlighting the difficulty of protecting machine learning against this threat.
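To make the attack idea concrete, below is a minimal sketch of a clustering-based subpopulation label-flip attack, assuming the high-level recipe described in the abstract: cluster the training features, pick one cluster as the target subpopulation, and inject mislabeled copies of its points. The toy dataset, cluster count, and target index are illustrative assumptions; the paper's optimized variants (influence functions, gradient-based poison crafting) are not reproduced here.

```python
# Hedged sketch of a subpopulation label-flip poisoning attack.
# All parameters (cluster count, target cluster, dataset) are
# illustrative assumptions, not the paper's exact implementation.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy two-class dataset standing in for a large, diverse corpus.
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# 1. Identify candidate subpopulations by clustering the features.
clusters = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X)

# 2. Choose one cluster as the target subpopulation (hypothetical choice).
target = 3
sub = clusters == target

# 3. Craft poison: copies of subpopulation points with flipped labels.
X_poison = X[sub]
y_poison = 1 - y[sub]

# 4. Train the victim model on the union of clean and poisoned data.
X_train = np.vstack([X, X_poison])
y_train = np.concatenate([y, y_poison])
poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The attack aims to degrade accuracy on the subpopulation while
# leaving the rest of the distribution largely untouched.
clean = LogisticRegression(max_iter=1000).fit(X, y)
print("subpop acc, clean model:   ", clean.score(X[sub], y[sub]))
print("subpop acc, poisoned model:", poisoned.score(X[sub], y[sub]))
print("rest acc, poisoned model:  ", poisoned.score(X[~sub], y[~sub]))
```

Because the poison points are natural inputs from the data distribution (only their labels change), this style of attack is harder to filter than backdoor triggers, which is the stealth property the abstract highlights.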
Year
2021
DOI
10.1145/3460120.3485368
Venue
Computer and Communications Security
Keywords
Adversarial Machine Learning, Poisoning Attacks, Fairness
DocType
Conference
Citations
2
PageRank
0.39
References
16
Authors
4
Name                     Order   Citations   PageRank
Matthew Jagielski        1       47          5.62
Giorgio Severi           2       2           0.39
Niklas Pousette Harger   3       2           0.39
Alina Oprea              4       1067        56.47