Title
Towards Sparsified Federated Neuroimaging Models via Weight Pruning
Abstract
Federated training of large deep neural networks is often restricted by the cost of communicating model updates, which grows with model size. Various model pruning techniques have been designed in centralized settings to reduce inference times. Combining centralized pruning techniques with federated training is an intuitive way to reduce communication costs: the model parameters are pruned right before each communication step. Moreover, progressively pruning the model during training can also reduce training time and cost. To this end, we propose FedSparsify, which performs model pruning during federated training. In experiments on the brain age prediction task (estimating a person's age from their brain MRI), in both centralized and federated settings, we demonstrate that models can be pruned to up to 95% sparsity without affecting performance, even in challenging federated learning environments with highly heterogeneous data distributions. A surprising benefit of model pruning is improved model privacy: we demonstrate that models with high sparsity are less susceptible to membership inference attacks, a type of privacy attack.
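The abstract describes pruning model parameters right before the communication step of federated training. The paper's exact FedSparsify schedule is not given here; as a hedged illustration only, a client-side magnitude-based pruning of a local update before communication could look like the sketch below (the function name and the flat-array setup are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction `sparsity` of the weights.

    Illustrative sketch: keeps the largest-magnitude weights, as in
    standard magnitude pruning; not the paper's exact procedure.
    """
    k = int(sparsity * weights.size)  # number of weights to zero out
    if k == 0:
        return weights.copy()
    flat = np.abs(weights).ravel()
    # k-th smallest magnitude serves as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

# Each round, a client would sparsify its local update before sending it
# to the server, shrinking the communicated payload:
rng = np.random.default_rng(0)
local_update = rng.normal(size=(100,))
sparse_update = magnitude_prune(local_update, sparsity=0.95)
```

At 95% sparsity, as in the abstract's headline result, only about 5% of the update's entries remain nonzero, which is what makes the communication savings possible when the sparse tensor is transmitted in a compressed format.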
Year
2022
DOI
10.1007/978-3-031-18523-6_14
Venue
International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI)
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
6
Name                Order  Citations  PageRank
Dimitris Stripelis  1      0          0.34
Umang Gupta         2      5          1.43
Nikhil Dhinagar     3      0          0.34
Greg Ver Steeg      4      243        32.99
Paul Thompson       5      29         8.20
José Luis Ambite    6      0          0.34