Title: A Secure and Efficient Federated Learning Framework for NLP
Abstract: In this work, we consider the problem of designing secure and efficient federated learning (FL) frameworks. Existing solutions either involve a trusted aggregator or require heavyweight cryptographic primitives, which degrade performance significantly. Moreover, many existing secure FL designs work only under the restrictive assumption that no client drops out of the training protocol. To tackle these problems, we propose SEFL, a secure and efficient FL framework that (1) eliminates the need for trusted entities, (2) achieves similar or even better model accuracy than existing FL designs, and (3) is resilient to client dropouts. Through extensive experimental studies on natural language processing (NLP) tasks, we demonstrate that SEFL achieves accuracy comparable to existing FL solutions, and that the proposed pruning technique improves runtime performance by up to 13.7x.
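The abstract attributes SEFL's runtime gains (up to 13.7x) to a pruning technique that shrinks the updates clients must encrypt and transmit. The paper's exact method is not reproduced here; the sketch below shows only the general idea of magnitude-based pruning of a client update, and the function name, sparse-dictionary representation, and `keep_ratio` value are illustrative assumptions.

```python
def prune_update(update, keep_ratio=0.1):
    """Keep only the largest-magnitude entries of a flat update vector.

    Returns a sparse representation {index: value} for the kept entries;
    a smaller update means less data to encrypt and aggregate.
    """
    k = max(1, int(len(update) * keep_ratio))
    # Indices of the k largest-magnitude entries.
    top = sorted(range(len(update)), key=lambda i: abs(update[i]), reverse=True)[:k]
    return {i: update[i] for i in sorted(top)}

# With keep_ratio=0.3 on a 10-entry update, the 3 largest-magnitude
# entries survive.
update = [0.02, -0.9, 0.05, 0.4, -0.01, 0.3, 0.0, -0.6, 0.07, 0.1]
print(prune_update(update, keep_ratio=0.3))  # → {1: -0.9, 3: 0.4, 7: -0.6}
```

In a secure-aggregation setting, pruning like this trades a small amount of accuracy for a large reduction in cryptographic and communication cost, since both scale with the number of transmitted parameters.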
Year: 2021
Venue: EMNLP
DocType: Conference
Volume: 2021.emnlp-main
Citations: 0
PageRank: 0.34
References: 0
Authors: 10
Order  Name                     Citations  PageRank
1      Chenghong Wang           54         9.71
2      Jieren Deng              0          0.68
3      Xianrui Meng             0          0.68
4      Yijue Wang               0          1.01
5      Ji Li                    97         10.87
6      Sheng Lin                139        14.39
7      Shuo Han                 0          0.34
8      Fei Miao                 0          0.34
9      Sanguthevar Rajasekaran  1508       190.34
10     Caiwen Ding              142        26.52