Abstract |
---|
Building robust deep neural network (DNN) machine learning models in adversarial settings is a problem of great importance to communication and cyber security. We consider white-box attacks in which an adversary has full knowledge of the learning architecture, but the adversary's ability to manipulate inputs is bounded in the L-p norm sense. Given that adversarial examples are generated via small perturbations to the input, we develop a scalable mathematical framework that leads to bounds on the effect of these input perturbations on the network output. We study several typical DNN components: linear transformations, ReLU, sigmoid, and double ReLU units. We use the well-calibrated MNIST data for experimental validation and present results and insights. |
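The record does not reproduce the paper's bounding framework. As a rough, hypothetical illustration of the kind of per-layer perturbation bound the abstract describes, the sketch below propagates a worst-case L2 input perturbation through a linear layer, a ReLU, and a sigmoid using their standard Lipschitz constants; the authors' actual framework may compute tighter or different bounds.

```python
# Hypothetical sketch (not the paper's method): propagate a worst-case
# L2 perturbation bound through typical DNN components via Lipschitz constants.
import numpy as np

def linear_bound(W, eps):
    """A linear map x -> Wx amplifies an L2 perturbation by at most
    the spectral norm of W (its largest singular value)."""
    return np.linalg.norm(W, ord=2) * eps

def relu_bound(eps):
    """ReLU is 1-Lipschitz, so it cannot enlarge the perturbation."""
    return eps

def sigmoid_bound(eps):
    """The sigmoid's derivative is at most 1/4, so it scales the
    perturbation by at most that factor."""
    return 0.25 * eps

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W1 = rng.standard_normal((128, 784))    # MNIST-sized input layer (assumed shapes)
    W2 = rng.standard_normal((10, 128))

    eps = 0.1                                # L2 bound on the input perturbation
    eps = relu_bound(linear_bound(W1, eps))  # layer 1: linear + ReLU
    eps = sigmoid_bound(linear_bound(W2, eps))  # layer 2: linear + sigmoid
    print(f"Worst-case L2 change of the output: {eps:.4f}")
```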
Year | DOI | Venue |
---|---|---
2018 | 10.1109/MILCOM.2018.8599814 | 2018 IEEE Military Communications Conference (MILCOM 2018)
Keywords | Field | DocType
---|---|---
deep neural networks, machine learning, adversarial examples | MNIST database, Computer science, Computer network, Artificial intelligence, Linear map, Adversary, Artificial neural network, Adversarial system, Bounded function, Sigmoid function, Scalability | Conference
ISSN | Citations | PageRank
---|---|---
2155-7578 | 0 | 0.34
References | Authors
---|---
0 | 4
Name | Order | Citations | PageRank |
---|---|---|---|
Todd P. Huster | 1 | 0 | 0.34 |
Cho-Yu Jason Chiang | 2 | 27 | 7.41 |
Ritu Chadha | 3 | 137 | 26.01 |
A. Swami | 4 | 5105 | 566.62