Abstract |
---|
Federated learning is a decentralized machine learning technique that aggregates partial models trained by a set of clients on their own private data to obtain a global model. This technique is vulnerable to security attacks, such as model poisoning, whereby malicious clients submit bad updates in order to prevent the model from converging or to introduce artificial bias in the classification. However, applying anti-poisoning techniques may lead to discrimination against minority groups whose data are significantly and legitimately different from those of the majority of clients. In this work, we strive to strike a balance between fighting poisoning and accommodating diversity, so as to learn fairer and less discriminatory federated learning models. In this way, we forestall the exclusion of diverse clients while still ensuring detection of poisoning attacks. Empirical work on a standard machine learning dataset shows that using our approach to tell legitimate from malicious updates produces models that are more accurate than those obtained with standard poisoning detection techniques. |
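The abstract describes the general federated learning pattern: a server aggregates client updates into a global model, and anti-poisoning defenses discard updates that look anomalous before aggregating. The following is a minimal illustrative sketch of that pattern only, not the paper's actual method; the coordinate-wise median filter, the L1 distance, and the threshold value are assumptions chosen for illustration.

```python
# Sketch of the aggregation-with-poisoning-filter pattern the abstract
# describes. NOT the paper's method: the median-distance filter and its
# threshold are illustrative assumptions.
from statistics import median

def filter_and_average(updates, threshold=2.0):
    """Average per-coordinate client updates, first dropping clients
    whose update lies far (in L1 distance) from the coordinate-wise
    median of all submitted updates."""
    dim = len(updates[0])
    med = [median(u[i] for u in updates) for i in range(dim)]

    def dist(u):
        return sum(abs(u[i] - med[i]) for i in range(dim))

    kept = [u for u in updates if dist(u) <= threshold]
    return [sum(u[i] for u in kept) / len(kept) for i in range(dim)]

# Three honest clients and one crude poisoner submitting extreme values.
updates = [[1.0, 1.0], [1.1, 0.9], [0.9, 1.1], [10.0, -10.0]]
print(filter_and_average(updates))  # poisoned update is excluded
```

Note that this kind of filter embodies exactly the tension the paper studies: a client whose data are legitimately different from the majority would also fall far from the median and risk being excluded along with the poisoners.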
Year | DOI | Venue |
---|---|---|
2020 | 10.1109/ICTAI50040.2020.00044 | 2020 IEEE 32nd International Conference on Tools with Artificial Intelligence (ICTAI) |
Keywords | DocType | ISSN
---|---|---
Federated learning, Security, Privacy, Fairness | Conference | 1082-3409
ISBN | Citations | PageRank
---|---|---
978-1-7281-8536-1 | 0 | 0.34
References | Authors
---|---
5 | 5
Name | Order | Citations | PageRank |
---|---|---|---|
Ashneet Khandpur Singh | 1 | 0 | 0.34 |
Alberto Blanco-Justicia | 2 | 14 | 6.77 |
Josep Domingo | 3 | 67 | 12.25 |
David Sánchez | 4 | 690 | 33.01 |
David Rebollo-Monedero | 5 | 801 | 49.00 |