Title: Limitations of the Lipschitz constant as a defense against adversarial examples
Abstract: Several recent papers have discussed utilizing Lipschitz constants to limit the susceptibility of neural networks to adversarial examples. We analyze recently proposed methods for computing the Lipschitz constant. We show that the Lipschitz constant may indeed enable adversarially robust neural networks. However, the methods currently employed for computing it suffer from theoretical and practical limitations. We argue that addressing this shortcoming is a promising direction for future research into certified adversarial defenses.
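For context, a standard way to compute a Lipschitz bound for a feedforward network, of the kind analyzed in this line of work, is to multiply the spectral norms of the layer weight matrices. The sketch below (not the paper's own code; the two-layer network and its weights are hypothetical) shows in NumPy how cheap this bound is to compute and how loose it can be:

```python
import numpy as np

def product_bound(weights):
    """Upper-bound the Lipschitz constant (in the l2 norm) of a
    feedforward network with 1-Lipschitz activations such as ReLU
    by multiplying the spectral norms of its weight matrices."""
    bound = 1.0
    for W in weights:
        bound *= np.linalg.norm(W, ord=2)  # largest singular value
    return bound

# Hypothetical network computing f(x) = relu(x) - relu(-x) = x.
W1 = np.array([[1.0], [-1.0]])  # scalar input -> (x, -x)
W2 = np.array([[1.0, -1.0]])    # recombine the ReLU outputs
print(product_bound([W1, W2]))  # ~2.0, yet f is only 1-Lipschitz
```

Here each layer has spectral norm sqrt(2), so the product bound is 2, while the composed map is the identity with true Lipschitz constant 1; this kind of looseness is one of the limitations the abstract refers to.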
Year: 2018
Venue: Nemesis/UrbReas/SoGood/IWAISe/GDM@PKDD/ECML
DocType: Conference
Volume: abs/1807.09705
Citations: 2
PageRank: 0.36
References: 11
Authors: 3
Name                 Order  Citations  PageRank
Todd Huster          1      2          0.36
Cho-Yu Jason Chiang  2      27         7.41
Ritu Chadha          3      137        26.01