Title
Data minimization for GDPR compliance in machine learning models
Abstract
The EU General Data Protection Regulation (GDPR) and the California Privacy Rights Act (CPRA) mandate the principle of data minimization, which requires that only the data necessary to fulfill a specific purpose be collected. However, it can often be difficult to determine the minimal amount of data required, especially in complex machine learning models such as deep neural networks. We present a first-of-its-kind method for reducing the amount of personal data needed to perform predictions with a machine learning model, by removing or generalizing some of the input features of the runtime data. Our method makes use of the knowledge encoded within the model to produce a generalization that has little to no impact on its accuracy, drawing on knowledge distillation approaches. We show that, in some cases, less data may be collected while preserving the exact same level of model accuracy as before, and that if a small deviation in accuracy is allowed, even more of the input features may be generalized. We also demonstrate that when the features are collected dynamically, the generalizations may be further improved. This method enables organizations to truly minimize the amount of data collected, thus fulfilling the data minimization requirement set out in the regulations.
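The abstract describes the approach only at a high level; the sketch below is not the paper's knowledge-distillation-based algorithm, but a minimal illustration of the underlying idea of trading input-feature granularity against model accuracy. The dataset, the scikit-learn classifier, quantile binning as the generalization operator, the greedy per-feature loop, and the accuracy tolerance are all assumptions introduced here for illustration.

```python
# Minimal sketch (not the paper's implementation): coarsen runtime input
# features one by one and keep a generalization only if test accuracy stays
# within a small tolerance of the original model's accuracy.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
baseline = accuracy_score(y_test, model.predict(X_test))

def generalize(X_run, feature, n_bins):
    """Replace a feature's values with the midpoint of their quantile bin."""
    Xg = X_run.copy()
    edges = np.quantile(X_train[:, feature], np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.digitize(Xg[:, feature], edges[1:-1]), 0, n_bins - 1)
    mids = (edges[:-1] + edges[1:]) / 2
    Xg[:, feature] = mids[idx]
    return Xg

# Greedily coarsen each feature as long as accuracy stays within the tolerance.
tolerance = 0.01  # assumed allowed accuracy deviation
X_run = X_test.copy()
for f in range(X_run.shape[1]):
    candidate = generalize(X_run, f, n_bins=4)
    if baseline - accuracy_score(y_test, model.predict(candidate)) <= tolerance:
        X_run = candidate  # feature f can be collected at coarser granularity

print(f"baseline accuracy: {baseline:.3f}")
print(f"accuracy with generalized inputs: "
      f"{accuracy_score(y_test, model.predict(X_run)):.3f}")
```

In this toy setup a feature that survives the loop only needs to be collected as a coarse range rather than an exact value, which mirrors the generalization idea in the abstract; the paper's actual method instead derives the generalizations from the knowledge encoded in the model.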
Year
2022
DOI
10.1007/s43681-021-00095-8
Venue
AI and Ethics
Keywords
GDPR, CPRA, Compliance, Data minimization, Privacy, Machine learning, Neural networks, Knowledge distillation
DocType
Journal
Volume
2
Issue
3
ISSN
2730-5953
Citations
0
PageRank
0.34
References
7
Authors
5
Name                Order  Citations  PageRank
Abigail Goldsteen   1      10         3.07
Gilad Ezov          2      0          0.34
Ron Shmelkin        3      0          0.34
Micha Moffie        4      87         9.35
Ariel Farkash       5      5          3.47