Title
The Space of Transferable Adversarial Examples
Abstract
Adversarial examples are maliciously perturbed inputs designed to mislead machine learning (ML) models at test-time. They often transfer: the same adversarial example fools more than one model. In this work, we propose novel methods for estimating the previously unknown dimensionality of the space of adversarial inputs. We find that adversarial examples span a contiguous subspace of large (~25) dimensionality. Adversarial subspaces with higher dimensionality are more likely to intersect. We find that for two different models, a significant fraction of their subspaces is shared, thus enabling transferability. In the first quantitative analysis of the similarity of different modelsu0027 decision boundaries, we show that these boundaries are actually close in arbitrary directions, whether adversarial or benign. We conclude by formally studying the limits of transferability. We derive (1) sufficient conditions on the data distribution that imply transferability for simple model classes and (2) examples of scenarios in which transfer does not occur. These findings indicate that it may be possible to design defenses against transfer-based attacks, even for models that are vulnerable to direct attacks.
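The abstract's core measurement, that adversarial examples around an input span a contiguous subspace of sizable dimensionality, can be illustrated by probing many mutually orthogonal perturbation directions and counting how many of them change the model's prediction. The sketch below is a minimal toy illustration of that idea, not the paper's actual estimation technique: it assumes a randomly drawn linear classifier, an L2 perturbation budget eps, and uses only numpy; every name and constant in it is invented for illustration.

```python
# Toy sketch (not the paper's method): count how many orthogonal perturbation
# directions around an input flip a linear classifier's prediction.
import numpy as np

rng = np.random.default_rng(0)
d = 100                          # input dimensionality
w = rng.normal(size=d)           # toy linear classifier: predict sign(w . x)
eps = 1.0                        # L2 perturbation budget (arbitrary choice)

# Place x just on the positive side of the decision boundary: remove its
# component along w, then step a small margin in the +w direction.
x = rng.normal(size=d)
x = x - w * (w @ x) / (w @ w) + 0.1 * w / np.linalg.norm(w)

def predicts_positive(z):
    return (w @ z) > 0

# Build an orthonormal set of candidate directions, each sign-flipped so it has
# a non-negative component along -w (the score-decreasing direction). This
# loosely mimics searching for many orthogonal adversarial directions.
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))   # columns are orthonormal
signs = np.where(Q.T @ (-w) > 0, 1.0, -1.0)
directions = Q * signs                          # columns remain orthonormal

# Count how many orthogonal directions flip the prediction within the budget.
adversarial = sum(
    not predicts_positive(x + eps * directions[:, i]) for i in range(d)
)
print(f"{adversarial} of {d} orthogonal directions are adversarial at eps={eps}")
```

For a linear model this count could be derived in closed form, but the same probe-and-count loop applies to any classifier whose predictions can be queried, which is the setting the paper's estimates target.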
Year
2017
Venue
arXiv: Machine Learning
Field
Subspace topology, Linear subspace, Curse of dimensionality, Artificial intelligence, Transferability, Mathematics, Machine learning, Adversarial system, The Intersect
DocType
Journal
Volume
abs/1704.03453
Citations
42
PageRank
1.29
References
12
Authors
5
Name                Order  Citations  PageRank
Florian Tramèr      1      463        26.53
Nicolas Papernot    2      1932       87.62
Ian J. Goodfellow   3      52242      68.13
Dan Boneh           4      21254      1398.98
P. McDaniel         5      7174       494.57