Title
Extending Defensive Distillation.
Abstract
Machine learning is vulnerable to adversarial examples: inputs carefully modified to force misclassification. Designing defenses against such inputs remains largely an open problem. In this work, we revisit defensive distillation---which is one of the mechanisms proposed to mitigate adversarial examples---to address its limitations. We view our results not only as an effective way of addressing some of the recently discovered attacks but also as reinforcing the importance of improved training techniques.
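The mechanism the abstract revisits is the original defensive distillation procedure (Papernot et al., 2016): a teacher network is trained with its softmax scaled by a temperature T, the training data is relabeled with the teacher's softened probabilities, and a distilled network of the same architecture is trained on those soft labels at the same temperature. The sketch below illustrates only that baseline procedure, not the extension this paper proposes; the two-layer network, the temperature value, and the synthetic data are illustrative assumptions.

# Minimal sketch of baseline defensive distillation (Papernot et al., 2016).
# Architecture, temperature, and synthetic data are illustrative assumptions,
# not taken from this paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_net(in_dim=20, n_classes=10):
    # Teacher and distilled (student) networks share the same architecture.
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, n_classes))

T = 20.0                          # distillation temperature (assumed value)
X = torch.randn(256, 20)          # synthetic training inputs
y = torch.randint(0, 10, (256,))  # synthetic hard labels

# 1) Train the teacher with its softmax scaled by temperature T.
teacher = make_net()
opt = torch.optim.Adam(teacher.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = F.cross_entropy(teacher(X) / T, y)
    loss.backward()
    opt.step()

# 2) Relabel the training data with the teacher's softened probabilities.
with torch.no_grad():
    soft_labels = F.softmax(teacher(X) / T, dim=1)

# 3) Train the distilled network on the soft labels at the same temperature.
student = make_net()
opt = torch.optim.Adam(student.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    log_probs = F.log_softmax(student(X) / T, dim=1)
    loss = -(soft_labels * log_probs).sum(dim=1).mean()  # cross-entropy with soft targets
    loss.backward()
    opt.step()

# At test time the temperature is set back to 1, i.e. predictions come from
# the unscaled logits: student(x).argmax(dim=1).

Raising the temperature during training smooths the softmax; evaluating at temperature 1 then shrinks the gradients available to gradient-based attackers, which is the property the original defense relied on and which later attacks circumvented.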
Year
2017
Venue
CoRR
DocType
Journal
Volume
abs/1705.05264
Citations
0
PageRank
0.34
References
0
Authors
2
Name              Order  Citations  PageRank
Nicolas Papernot  1      1932       87.62
P. McDaniel       2      71744      94.57