Title |
---|
Differentially private self-normalizing neural networks for adversarial robustness in federated learning |
Abstract |
---|
The need for robust, secure, and private machine learning is an important goal for realizing the full potential of the Internet of Things (IoT). Federated Learning has proven to help protect against privacy violations and information leakage. However, it introduces new risk vectors that make Machine Learning models harder to defend against adversarial samples. We consider the problem of improving the resilience of Federated Learning to adversarial samples without compromising the privacy-preserving features that Federated Learning was primarily designed to provide. Common techniques such as adversarial training are, in their basic form, not well suited to a Federated Learning environment. Our study shows that adversarial training, while improving adversarial robustness, comes at the cost of reducing the privacy guarantee of Federated Learning. We introduce DiPSeN, a Differentially Private Self-normalizing Neural Network, which combines elements of differential privacy, self-normalization, and a novel optimization algorithm for adversarial client selection. Our empirical results on publicly available datasets for intrusion detection and image classification show that DiPSeN successfully improves adversarial robustness in Federated Learning while maintaining its privacy-preserving characteristics. |
Year | DOI | Venue |
---|---|---|
2022 | 10.1016/j.cose.2022.102631 | COMPUTERS & SECURITY |
Keywords | DocType | Volume |
---|---|---|
Federated learning, Adversarial samples, Self-normalizing neural networks (SNN) | Journal | 116 |
ISSN | Citations | PageRank |
---|---|---|
0167-4048 | 0 | 0.34 |
References | Authors |
---|---|
0 | 3 |
Name | Order | Citations | PageRank |
---|---|---|---|
Olakunle Ibitoye | 1 | 0 | 0.34 |
M. Omair Shafiq | 2 | 0 | 0.34 |
Ashraf Matrawy | 3 | 146 | 26.98 |