Abstract |
---|
With the popularity of AIoT (Artificial Intelligence of Things) services, smart end devices can be expected to generate tremendous amounts of user data at the edge. In particular, it is critical to address how to properly distill knowledge from the edge network in a communication-efficient and privacy-preserving manner. Federated learning (FL), one of the promising machine learning frameworks, ensures data privacy by allowing end devices to collaboratively train a shared model without exposing raw data to an aggregation server. However, due to its distributed nature, the framework is vulnerable to two major threats, model inversion attacks and model poisoning attacks, which an abnormal aggregator or malicious end devices may launch during the training phase. The former leaks sensitive information by reversing model weights back to users' raw data, while the latter breaks model security and misleads the global model into wrong inference results. Unfortunately, existing research has not tackled such two-sided model attacks occurring concurrently in FL. Therefore, in this paper, we propose a dual-masking federated learning (DMFL) framework that advocates uploading partial weights in the aggregation process and applies two kinds of masks, one on the end-device side and one on the aggregator side. On benchmark image-classification data, our experimental results show that the proposed DMFL framework outperforms other baselines, confirming that it can successfully preserve weight privacy and protect model security for AIoT. |
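This record gives no algorithmic details of DMFL beyond the abstract, so the following is only a minimal illustrative sketch of the general idea it describes: each client uploads only a masked subset of its weights (limiting what a model inversion attack can reverse), and the aggregator applies its own mask to screen out anomalously large updates (a simple poisoning defense). All function names, the top-k magnitude criterion, and the median-based outlier threshold are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def client_mask(weights: np.ndarray, upload_ratio: float) -> np.ndarray:
    """Client-side mask (illustrative): keep only the largest-magnitude
    fraction of weights; the rest are never uploaded, so the server sees
    only a partial view of the local model. Ties at the threshold may
    admit a few extra entries, which is fine for a sketch."""
    k = max(1, int(upload_ratio * weights.size))
    thresh = np.sort(np.abs(weights).ravel())[::-1][k - 1]
    return np.abs(weights) >= thresh

def server_mask(updates, masks, clip_factor=3.0):
    """Server-side mask (illustrative): flag clients whose masked-update
    norm is far above the median as potential poisoners and drop them."""
    norms = np.array([np.linalg.norm(np.where(m, w, 0.0))
                      for w, m in zip(updates, masks)])
    keep = norms <= clip_factor * np.median(norms)
    return ([w for w, k in zip(updates, keep) if k],
            [m for m, k in zip(masks, keep) if k])

def aggregate_partial(updates, masks):
    """Aggregate partial uploads: average each coordinate only over the
    clients that actually uploaded that coordinate."""
    num = np.zeros_like(updates[0])
    cnt = np.zeros_like(updates[0])
    for w, m in zip(updates, masks):
        num += np.where(m, w, 0.0)
        cnt += m.astype(float)
    return np.divide(num, np.maximum(cnt, 1.0))
```

For example, with two honest clients uploading half their weights and one poisoned client sending a huge update, the server mask discards the outlier before the masked entries of the survivors are averaged coordinate-wise.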
Year | DOI | Venue |
---|---|---|
2021 | 10.1109/GLOBECOM46510.2021.9685701 | 2021 IEEE Global Communications Conference (GLOBECOM) |
Keywords | DocType | ISSN
---|---|---|
Federated Learning, Model Poisoning Attacks, Model Inversion Attacks, Weights Privacy, Model Security, Dual-Masking Framework | Conference | 2334-0983
ISBN | Citations | PageRank
---|---|---|
978-1-7281-8105-9 | 1 | 0.35
References | Authors
---|---|
0 | 4
Name | Order | Citations | PageRank |
---|---|---|---|
Te-Chuan Chiu | 1 | 1 | 0.35 |
Wei-Che Lin | 2 | 1 | 0.35 |
Ai-Chun Pang | 3 | 621 | 66.26 |
Li-Chen Cheng | 4 | 1 | 0.35 |