Title
Trust Region Based Adversarial Attack On Neural Networks
Abstract
Deep neural networks are quite vulnerable to adversarial perturbations. Current state-of-the-art adversarial attack methods typically require either very time-consuming hyper-parameter tuning or many iterations to solve an optimization-based adversarial attack. To address this, we present a new family of trust region based adversarial attacks, with the goal of computing adversarial perturbations efficiently. We propose several attacks based on variants of the trust region optimization method. We test the proposed methods on the CIFAR-10 and ImageNet datasets using several different models, including AlexNet, ResNet-50, VGG-16, and DenseNet-121. Our methods achieve results comparable to the Carlini-Wagner (CW) attack, but with a significant speedup of up to 37x for the VGG-16 model on a Titan Xp GPU. For ResNet-50 on ImageNet, we can bring the classification accuracy down to less than 0.1% with at most 1.5% relative L-infinity (or L-2) perturbation, requiring only 1.02 seconds compared to 27.04 seconds for the CW attack. We have open-sourced our method, which can be accessed at [1].
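The abstract describes the approach only at a high level. As an illustration of the general trust-region idea behind such attacks, the sketch below takes a perturbation step inside an adaptive trust radius that grows or shrinks according to how well a first-order model of the loss predicted the actual loss change. The function name `tr_attack_linf`, the hyper-parameter values, and the simple acceptance rule are illustrative assumptions, not the authors' exact algorithm.

```python
import torch
import torch.nn.functional as F


def tr_attack_linf(model, x, y, eps0=1e-3, eps_max=1e-2, steps=50,
                   sigma_low=0.25, sigma_high=0.75):
    """Minimal trust-region-style L-infinity attack sketch (untargeted).

    NOTE: hyper-parameter names/values and the acceptance rule are
    illustrative assumptions, not the paper's exact algorithm.
    """
    x_adv = x.clone().detach()
    eps = eps0
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)

        # First-order trust-region subproblem over an L-infinity ball:
        # the maximizer of <grad, d> subject to ||d||_inf <= eps is
        # the signed-gradient step.
        step = eps * grad.sign()
        predicted = (grad * step).sum()

        with torch.no_grad():
            x_new = (x_adv + step).clamp(0.0, 1.0)
            actual = F.cross_entropy(model(x_new), y) - loss

            # Trust-region radius update from the agreement ratio
            # between actual and predicted loss increase.
            rho = (actual / (predicted + 1e-12)).item()
            if rho > sigma_high:
                eps = min(2.0 * eps, eps_max)   # good model: enlarge radius
            elif rho < sigma_low:
                eps = 0.5 * eps                 # poor model: shrink radius

            if actual > 0:                      # accept only ascent steps
                x_adv = x_new
            if model(x_adv).argmax(dim=1).ne(y).all():
                break                           # every input misclassified
        x_adv = x_adv.detach()
    return x_adv.detach()
```

The adaptive radius is what distinguishes this kind of scheme from fixed-step iterative attacks: when the local model tracks the true loss well, the step size is enlarged toward a cap instead of being kept conservatively small, which is the intuition behind the efficiency claims in the abstract.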
Year
2018
DOI
10.1109/CVPR.2019.01161
Venue
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019)
Field
Trust region, Artificial intelligence, Artificial neural network, Mathematics, Deep neural networks, Machine learning, Speedup, Adversarial system
DocType
Journal
Volume
abs/1812.06371
ISSN
1063-6919
Citations
0
PageRank
0.34
References
0
Authors
5
Name, Order, Citations, PageRank
Zhewei Yao, 1, 31, 10.58
Amir Gholami, 2, 66, 12.99
Peng Xu, 3, 20, 3.44
Kurt Keutzer, 4, 5040, 801.67
Michael W. Mahoney, 5, 3297, 218.10