Title
Differential Privacy Protection Against Membership Inference Attack on Machine Learning for Genomic Data
Abstract
Machine learning is a powerful tool for modeling massive genomic data, but genome privacy is a growing concern. Studies have shown that not only the raw data but also the trained model can potentially compromise genome privacy. An example is the membership inference attack (MIA), by which an adversary can determine whether a specific record was included in the training dataset of the target model. Differential privacy (DP) has been used to defend against MIA with a rigorous privacy guarantee by perturbing model weights. In this paper, we investigate the vulnerability of machine learning models to MIA on genomic data, and evaluate the effectiveness of using DP as a defense mechanism. We consider two widely used machine learning models, namely Lasso and convolutional neural networks (CNNs), as the target models. We study the trade-off between the defense power against MIA and the prediction accuracy of the target model under various privacy settings of DP. Our results show that the relationship between the privacy budget and target model accuracy can be modeled as a log-like curve; thus, a smaller privacy budget provides a stronger privacy guarantee at the cost of reduced model accuracy. We also investigate the effect of model sparsity on model vulnerability to MIA. Our results demonstrate that, in addition to preventing overfitting, model sparsity can work together with DP to significantly mitigate the risk of MIA.
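The defense described in the abstract, perturbing model weights under differential privacy, can be illustrated with a minimal sketch. The Python code below is a hypothetical example, not the authors' implementation: the helper name dp_perturb_weights, the sensitivity bound of 1.0, and the privacy budget epsilon = 0.5 are illustrative assumptions. It applies the standard Laplace mechanism (noise scale = sensitivity / epsilon) to the weights of a Lasso model.

```python
import numpy as np
from sklearn.linear_model import Lasso

def dp_perturb_weights(weights, sensitivity, epsilon, rng):
    """Laplace mechanism: add noise with scale sensitivity/epsilon to each
    weight, yielding epsilon-DP for the released weights, provided the
    stated sensitivity bound actually holds for the training procedure."""
    scale = sensitivity / epsilon
    return weights + rng.laplace(loc=0.0, scale=scale, size=weights.shape)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))            # stand-in for genomic features
y = 0.5 * X[:, 0] + rng.normal(size=200)  # synthetic phenotype

model = Lasso(alpha=0.1).fit(X, y)

# sensitivity=1.0 is a placeholder; a real deployment must first derive a
# bound on how much one training record can change the learned weights.
private_weights = dp_perturb_weights(model.coef_, sensitivity=1.0,
                                     epsilon=0.5, rng=rng)
```

A smaller epsilon inflates the noise scale, which is the mechanism behind the trade-off the abstract reports: stronger privacy (and better MIA resistance) comes at the cost of lower prediction accuracy.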
Year
2021
DOI
10.1142/9789811232701_0003
Venue
PACIFIC SYMPOSIUM ON BIOCOMPUTING 2021
Keywords
Differential privacy, Membership inference attack, Machine learning, Genomics
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
3
Name            Order  Citations  PageRank
Junjie Chen     1      0          0.34
Wendy Hui Wang  2      0          0.34
Xinghua Shi     3      3          1.78