Title
Why Do Adversarial Attacks Transfer? Explaining Transferability Of Evasion And Poisoning Attacks
Abstract
Transferability captures the ability of an attack against a machine-learning model to be effective against a different, potentially unknown, model. Empirical evidence for transferability has been shown in previous work, but the underlying reasons why an attack does or does not transfer are not yet well understood. In this paper, we present a comprehensive analysis aimed at investigating the transferability of both test-time evasion and training-time poisoning attacks. We provide a unifying optimization framework for evasion and poisoning attacks, and a formal definition of transferability of such attacks. We highlight two main factors contributing to attack transferability: the intrinsic adversarial vulnerability of the target model, and the complexity of the surrogate model used to optimize the attack. Based on these insights, we define three metrics that impact an attack's transferability. Interestingly, our results derived from theoretical analysis hold for both evasion and poisoning attacks, and are confirmed experimentally using a wide range of linear and non-linear classifiers and datasets.
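To make the notion of transferability described in the abstract concrete, below is a minimal, self-contained sketch (not taken from the paper): an FGSM-style evasion perturbation is optimized against a logistic-regression surrogate and then tested against a separately trained target model. The data, the models, the perturbation budget eps, and all function names are illustrative assumptions, not the authors' experimental setup.

```python
# Minimal sketch of evasion-attack transferability (illustrative, not the paper's setup):
# craft a gradient-based perturbation against a surrogate linear classifier and
# check whether it also evades a separately trained target classifier.
import numpy as np

rng = np.random.default_rng(0)

def train_logreg(X, y, lr=0.1, epochs=200):
    """Train a simple logistic-regression classifier by gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

# Two-class Gaussian data; surrogate and target are trained on disjoint samples.
X0 = rng.normal(-1.0, 1.0, size=(200, 2))        # class 0
X1 = rng.normal(+1.0, 1.0, size=(200, 2))        # class 1
X = np.vstack([X0, X1]); y = np.array([0] * 200 + [1] * 200)
idx = rng.permutation(len(y))
Xs, ys = X[idx[:200]], y[idx[:200]]              # surrogate's training data
Xt, yt = X[idx[200:]], y[idx[200:]]              # target's training data

w_sur, b_sur = train_logreg(Xs, ys)              # surrogate model (attacker's copy)
w_tgt, b_tgt = train_logreg(Xt, yt)              # target model (unknown to attacker)

# Perturb a class-1 point against the surrogate: for a linear model and a
# class-1 input, the loss gradient w.r.t. the input points along -w, so an
# FGSM-style step x + eps * sign(grad) moves the point along -sign(w_sur).
x = X1[0]
eps = 2.0                                         # illustrative (large) perturbation budget
x_adv = x - eps * np.sign(w_sur)

predict = lambda z, w, b: int((z @ w + b) > 0)
print("surrogate on x_adv:", predict(x_adv, w_sur, b_sur))  # expected 0: evades the surrogate
print("target    on x_adv:", predict(x_adv, w_tgt, b_tgt))  # attack transfers if this is also 0
```

In the paper's terms, the attack transfers when the point optimized against the surrogate also evades the target; the abstract relates the likelihood of this event to the target model's intrinsic adversarial vulnerability and to the complexity of the surrogate used to optimize the attack.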
Year
2019
Venue
PROCEEDINGS OF THE 28TH USENIX SECURITY SYMPOSIUM
Field
Empirical evidence, Surrogate model, Formal description, Artificial intelligence, Transferability, Mathematics, Instrumental and intrinsic value, Machine learning, Vulnerability, Adversarial system
DocType
Conference
Citations
1
PageRank
0.35
References
23
Authors
8
Name                  Order  Citations  PageRank
Ambra Demontis        1      108        9.25
Marco Melis           2      132        11.03
Maura Pintor          3      1          4.07
Matthew Jagielski     4      47         5.62
Battista Biggio       5      1224       73.49
Alina Oprea           6      1067       56.47
Cristina Nita-Rotaru  7      1855       100.14
Fabio Roli            8      4846       311.69