Title: ResSFL: A Resistance Transfer Framework for Defending Model Inversion Attack in Split Federated Learning
Abstract: This work aims to tackle Model Inversion (MI) attacks on Split Federated Learning (SFL). SFL is a recent distributed training scheme in which multiple clients send intermediate activations (i.e., feature maps), instead of raw data, to a central server. While such a scheme helps reduce the computational load at the client end, it exposes the raw data to reconstruction from the intermediate activations by the server. Existing works on protecting SFL only consider inference and do not handle attacks during training. We therefore propose ResSFL, a Split Federated Learning framework designed to be MI-resistant during training. It is based on deriving a resistant feature extractor via attacker-aware training, and using this extractor to initialize the client-side model prior to standard SFL training. This method reduces both the computational complexity of using a strong inversion model in client-side adversarial training and the vulnerability to attacks launched in early training epochs. On the CIFAR-100 dataset, our proposed framework successfully mitigates MI attacks on a VGG-11 model with a high reconstruction Mean-Square-Error of 0.050, compared to 0.005 obtained by the baseline system. The framework achieves 67.5% accuracy (only a 1% accuracy drop) with very low computation overhead. Code is released at: https://github.com/zlijingtao/ResSFL.
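The abstract quantifies MI resistance by the mean squared error between the attacker's reconstructed inputs and the raw data (0.050 for ResSFL vs. 0.005 for the baseline, where higher means harder to invert). A minimal sketch of that metric, assuming NumPy arrays of matching shape; the function name is illustrative and not taken from the paper's released code:

```python
import numpy as np

def reconstruction_mse(original, reconstructed):
    """Mean squared error between raw inputs and the attacker's
    reconstructions; a higher value indicates stronger MI resistance."""
    original = np.asarray(original, dtype=np.float64)
    reconstructed = np.asarray(reconstructed, dtype=np.float64)
    return float(np.mean((original - reconstructed) ** 2))

# Toy check on a batch of 2 images of shape 3x32x32 (CIFAR-like):
# identical inputs give MSE 0; a uniform 0.1 offset gives MSE ~0.01.
x = np.zeros((2, 3, 32, 32))
print(reconstruction_mse(x, x))        # 0.0
print(reconstruction_mse(x, x + 0.1))  # ~0.01
```

In the paper's setting, `original` would be the clients' raw images and `reconstructed` the output of the server-side inversion model applied to the intermediate activations.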
Year: 2022
DOI: 10.1109/CVPR52688.2022.00995
Venue: IEEE Conference on Computer Vision and Pattern Recognition
Keywords: Privacy and federated learning; Efficient learning and inferences; Transfer/low-shot/long-tail learning; Transparency, fairness, accountability, privacy and ethics in vision
DocType: Conference
Volume: 2022
Issue: 1
Citations: 0
PageRank: 0.34
References: 0
Authors: 6
Authors (Order. Name, with fused Citations/PageRank figures as extracted):
1. Jingtao Li (34.15)
2. Adnan Siraj Rakin (307.89)
3. Xing Chen (96.98)
4. Zhezhi He (13625.37)
5. Deliang Fan (37553.66)
6. Chaitali Chakrabarti (1978184.17)