Title
Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness?
Abstract
While additional training data improves the robustness of deep neural networks against adversarial examples, it presents the challenge of curating a large number of specific real-world samples. We circumvent this challenge by using additional data from proxy distributions learned by advanced generative models. We first seek to formally understand the transfer of robustness from classifiers trained on proxy distributions to the real data distribution. We prove that the difference between the robustness of a classifier on the two distributions is upper bounded by the conditional Wasserstein distance between them. Next, we use proxy distributions to significantly improve the performance of adversarial training on five different datasets. For example, we improve robust accuracy by up to $7.5$% and $6.7$% in the $\ell_{\infty}$ and $\ell_2$ threat models, respectively, over baselines that do not use proxy distributions on the CIFAR-10 dataset. We also improve certified robust accuracy by $7.6$% on the CIFAR-10 dataset. We further demonstrate that different generative models bring disparate improvements to the performance of robust training. We propose a robust discrimination approach to characterize this impact and provide a deeper understanding of why diffusion-based generative models are a better choice of proxy distribution than generative adversarial networks.
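A minimal formal sketch of the transfer bound claimed in the abstract (the notation here is ours, not necessarily the paper's): writing $\mathcal{D}$ for the real data distribution, $\tilde{\mathcal{D}}$ for the proxy distribution, and $\mathrm{Rob}_{\mathcal{D}}(h)$ for the robust accuracy of a classifier $h$ under a fixed threat model, the stated result takes the form
$$\left| \mathrm{Rob}_{\mathcal{D}}(h) - \mathrm{Rob}_{\tilde{\mathcal{D}}}(h) \right| \;\le\; \mathcal{W}\!\left(\mathcal{D}, \tilde{\mathcal{D}}\right),$$
where $\mathcal{W}$ denotes the conditional Wasserstein distance between the two distributions, i.e., a Wasserstein distance measured between their class-conditional distributions. Intuitively, if the proxy distribution is close to the real one class by class, robustness measured on one distribution transfers to the other up to that distance.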
Year
2022
Venue
International Conference on Learning Representations (ICLR)
Keywords
adversarial robustness, certified adversarial robustness, adversarial attacks, generative models, proxy distribution
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
7
Name               Order  Citations  PageRank
Vikash Sehwag      1      17         5.32
Saeed Mahloujifar  2      0          0.34
Tinashe Handina    3      0          0.68
Sihui Dai          4      0          0.68
Chong Xiang        5      0          1.35
Mung Chiang        6      7303       486.32
Prateek Mittal     7      0          0.34