Title
The Unintended Consequences of Overfitting: Training Data Inference Attacks
Abstract
Machine learning algorithms that are applied to sensitive data pose a distinct threat to privacy. A growing body of prior work demonstrates that models produced by these algorithms may leak specific private information in the training data to an attacker, either through their structure or their observable behavior. However, the underlying cause of this privacy risk is not well understood beyond a handful of anecdotal accounts that suggest overfitting and influence might play a role. This paper examines the effect that overfitting and influence have on an attacker's ability to learn information about the training data from machine learning models, either through training set membership inference or model inversion attacks. Using both formal and empirical analyses, we illustrate a clear relationship between these factors and the privacy risk that arises in several popular machine learning algorithms. We find that overfitting is sufficient to allow an attacker to perform membership inference and, when certain conditions on the influence of individual features hold, model inversion attacks as well. Interestingly, our formal analysis also shows that overfitting is not necessary for these attacks, and it begins to shed light on what other factors may be at play. Finally, we explore the relationship between membership inference and model inversion, and show that there are deep connections between the two that lead to effective new attacks.
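To illustrate the connection between overfitting and membership inference described in the abstract, the sketch below mounts a simple loss-threshold attack against a deliberately overfit classifier: examples whose loss under the model falls below a threshold are guessed to be training-set members. This is a minimal illustrative sketch, not the authors' exact experiment; the synthetic dataset, the random-forest model, and the choice of threshold (the average training loss) are assumptions made here for demonstration.

```python
# Illustrative loss-threshold membership inference attack (assumptions: synthetic
# data, random-forest model, threshold = mean training loss; not the paper's setup).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic binary classification task; half the points are "members" (training set),
# the other half are held-out "non-members".
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_out, y_train, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

# Deep, unregularized trees tend to overfit, producing a large train/test loss gap.
model = RandomForestClassifier(n_estimators=50, max_depth=None, random_state=0)
model.fit(X_train, y_train)

def per_example_loss(model, X, y):
    """Cross-entropy loss of each example under the model's predicted probabilities."""
    probs = model.predict_proba(X)
    eps = 1e-12
    return -np.log(np.clip(probs[np.arange(len(y)), y], eps, None))

loss_in = per_example_loss(model, X_train, y_train)   # members
loss_out = per_example_loss(model, X_out, y_out)      # non-members

# Membership rule: guess "member" when the example's loss is below the mean training loss.
threshold = loss_in.mean()
tpr = np.mean(loss_in < threshold)    # members correctly identified
fpr = np.mean(loss_out < threshold)   # non-members falsely flagged
print(f"membership advantage (TPR - FPR): {tpr - fpr:.3f}")
```

The attack's advantage is driven by the generalization gap: the more the model overfits, the more separable the training and held-out loss distributions become, which is consistent with the paper's claim that overfitting suffices for membership inference.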
Year
2017
Venue
arXiv: Cryptography and Security
Field
Training set, Model inversion, Unintended consequences, Computer security, Inference, Computer science, Artificial intelligence, Overfitting, Private information retrieval, Machine learning
DocType
Journal
Volume
abs/1709.01604
Citations
1
PageRank
0.35
References
0
Authors
3
Name             Order  Citations  PageRank
Samuel Yeom      1      2          1.71
Matt Fredrikson  2      972        48.56
S. Jha           3      7921       539.19