Title
Robust Sparse Regularization: Defending Adversarial Attacks Via Regularized Sparse Network
Abstract
Deep Neural Networks (DNNs) trained by gradient descent are known to be vulnerable to maliciously perturbed adversarial inputs, a.k.a. adversarial attacks. As a countermeasure, increasing model capacity to enhance DNN robustness has been discussed and reported as an effective approach by many recent works. In this work, we show that shrinking the model size through proper weight pruning can even help improve DNN robustness under adversarial attack. To obtain a simultaneously robust and compact DNN model, we propose a multi-objective training method called Robust Sparse Regularization (RSR), which fuses several regularization techniques: channel-wise noise injection, a lasso weight penalty, and adversarial training. We conduct extensive experiments to show the effectiveness of RSR against popular white-box (i.e., PGD and FGSM) and black-box attacks. Thanks to RSR, 85% of the weight connections of ResNet-18 can be pruned while still achieving 0.68% and 8.72% improvements in clean- and perturbed-data accuracy, respectively, on the CIFAR-10 dataset, compared to its PGD adversarial-training baseline.
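The training objective the abstract describes combines single-step/iterative adversarial training, an L1 (lasso) weight penalty, noise injection, and magnitude-based weight pruning. The following is a minimal sketch of that combination on a toy logistic-regression model, not the authors' implementation: the model, hyperparameters (`lam`, `eps`, `lr`), and pruning threshold are all illustrative assumptions, and input-level Gaussian noise stands in for the paper's channel-wise noise injection on feature maps.

```python
import numpy as np

# Toy data: labels are a linear function of the first 3 input features.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 10))
true_w = np.zeros(10)
true_w[:3] = 1.0
y = (X @ true_w > 0).astype(float)

w = rng.normal(scale=0.1, size=10)   # model weights (toy "network")
lam, eps, lr = 1e-3, 0.1, 0.5        # lasso strength, FGSM budget, step size (assumed values)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(200):
    # FGSM-style adversarial example: perturb inputs along the sign of
    # the input gradient of the loss (single-step white-box attack).
    p = sigmoid(X @ w)
    grad_x = (p - y)[:, None] * w[None, :]
    X_adv = X + eps * np.sign(grad_x)

    # Stand-in for channel-wise noise injection: Gaussian noise added to
    # the perturbed inputs during training only.
    X_noisy = X_adv + rng.normal(scale=0.05, size=X_adv.shape)

    # Gradient step on adversarial loss + lasso penalty (subgradient of L1).
    p_noisy = sigmoid(X_noisy @ w)
    grad_w = X_noisy.T @ (p_noisy - y) / len(y) + lam * np.sign(w)
    w -= lr * grad_w

# Magnitude pruning: the lasso penalty drives many weights toward zero,
# so small-magnitude weights can be removed with little accuracy loss.
mask = np.abs(w) > 0.05
w_pruned = w * mask
sparsity = 1.0 - mask.mean()
clean_acc = ((sigmoid(X @ w_pruned) > 0.5) == y).mean()
```

The design point the sketch illustrates is that the lasso term makes the weight distribution prune-friendly, so sparsification composes with, rather than undoes, the adversarial training.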
Year
2020
DOI
10.1145/3386263.3407651
Venue
GLSVLSI '20: Great Lakes Symposium on VLSI, Virtual Event, China, September 2020
DocType
Conference
ISBN
978-1-4503-7944-1
Citations
0
PageRank
0.34
References
10
Authors
6
Name                Order  Citations  PageRank
Adnan Siraj Rakin   1      2          1.80
Zhezhi He           2      136        25.37
Li Yang             3      0          0.68
Yanzhi Wang         4      7          1.51
Liqiang Wang        5      703        56.71
Deliang Fan         6      375        53.66