Title
On the Security Relevance of Initial Weights in Deep Neural Networks
Abstract
Recently, a weight-based attack on stochastic gradient descent that induces overfitting has been proposed. We show that the threat is broader: a task-independent permutation of the initial weights suffices to limit the achieved accuracy to, for example, 50% on the Fashion MNIST dataset, down from initially more than 90%. We corroborate these findings on MNIST and CIFAR. We formally confirm that the attack succeeds with high likelihood and does not depend on the data. Empirically, weight statistics and loss appear unsuspicious, making the attack hard to detect if the user is not aware of it. Our paper is thus a call to action to acknowledge the importance of the initial weights in deep learning.
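The abstract's stealth claim (permuted initial weights leave weight statistics unsuspicious) can be illustrated with a minimal NumPy sketch. This is a hypothetical illustration, not the paper's exact per-layer construction; the helper name `permute_init_weights` and the Glorot-like initialization are assumptions.

```python
import numpy as np

def permute_init_weights(w, rng):
    """Apply a data-independent random permutation to an initial
    weight matrix. A permutation reorders the entries but keeps the
    exact multiset of values, so summary statistics are unchanged."""
    flat = w.flatten()
    perm = rng.permutation(flat.size)
    return flat[perm].reshape(w.shape)

rng = np.random.default_rng(0)
# Hypothetical dense-layer initialization (e.g. 784 -> 128 units).
w0 = rng.normal(0.0, 0.05, size=(784, 128))
w_adv = permute_init_weights(w0, rng)

# Mean and standard deviation are preserved exactly, which is one
# reason weight statistics alone cannot reveal the manipulation.
print(np.isclose(w0.mean(), w_adv.mean()))          # True
print(np.isclose(w0.std(), w_adv.std()))            # True
print(np.array_equal(np.sort(w0, axis=None),
                     np.sort(w_adv, axis=None)))    # True
```

Because only the arrangement of the values changes, any detector that inspects per-layer moments or histograms sees an initialization identical to a benign one.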
Year: 2020
DOI: 10.1007/978-3-030-61609-0_1
Venue: ICANN (1)
DocType: Conference
Citations: 0
PageRank: 0.34
References: 0
Authors: 5
Name                    Order  Citations  PageRank
Kathrin Grosse          1      0          0.34
Thomas Alexander Trost  2      0          0.34
Marius Mosbach          3      4          4.79
Michael Backes          4      2801       163.28
Dietrich Klakow         5      756        98.76