Title
Defend Deep Neural Networks Against Adversarial Examples via Fixed and Dynamic Quantized Activation Functions
Abstract
Recent studies have shown that deep neural networks (DNNs) are vulnerable to adversarial attacks. To this end, many defense approaches that attempt to improve the robustness of DNNs have been proposed. In a separate yet related area, recent works have explored quantizing neural network weights and activation functions into low bit-width representations to compress model size and reduce computational complexity. In this work, we find that these two different tracks, namely the pursuit of network compactness and robustness, can be merged into one, giving rise to networks that enjoy both advantages. To the best of our knowledge, this is the first work that uses quantization of activation functions to defend against adversarial examples. We also propose to train robust neural networks by using adaptive quantization techniques for the activation functions. Our proposed Dynamic Quantized Activation (DQA) is verified through a wide range of experiments with the MNIST and CIFAR-10 datasets under different white-box attack methods, including FGSM, PGD, and C&W attacks. Furthermore, Zeroth Order Optimization and substitute-model-based black-box attacks are also considered in this work. The experimental results clearly show that the robustness of DNNs can be greatly improved using the proposed DQA.
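As a rough illustration of the ingredients the abstract names, the sketch below shows a generic fixed uniform activation quantizer and the FGSM input perturbation. Both are textbook formulations for illustration only; the function names, bit-width, and clipping range are assumptions, not the paper's exact DQA scheme.

```python
import numpy as np

def quantized_relu(x, n_bits=2, x_max=1.0):
    """Generic fixed quantized ReLU (illustrative, not the paper's exact DQA):
    clip to [0, x_max], then snap to 2**n_bits uniform levels."""
    levels = 2 ** n_bits - 1          # number of quantization steps
    x = np.clip(x, 0.0, x_max)        # ReLU-style clipping
    return np.round(x * levels / x_max) * (x_max / levels)

def fgsm_perturb(x, grad, eps=0.1):
    """FGSM white-box attack step: move the input by eps in the sign
    of the loss gradient w.r.t. the input, then clip to valid pixel range."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)
```

With 2-bit quantization, activations collapse onto four levels {0, 1/3, 2/3, 1}, which is the kind of coarse activation response the paper argues can blunt small adversarial perturbations.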
Year
2018
Venue
arXiv: Learning
Field
MNIST database, Zeroth law of thermodynamics, Compact space, Robustness (computer science), Artificial intelligence, Quantization (physics), Quantization (signal processing), Artificial neural network, Machine learning, Mathematics, Computational complexity theory
DocType
Journal
Volume
abs/1807.06714
Citations
3
PageRank
0.37
References
0
Authors
4
Name                 Order    Citations    PageRank
Adnan Siraj Rakin    1        30           7.89
Jinfeng Yi           2        4853         3.71
Boqing Gong          3        6853         3.29
Deliang Fan          4        3755         3.66