Title
MIDAS: Model Inversion Defenses Using an Approximate Memory System
Abstract
Private data constitute a significant share of the training data for machine learning (ML) algorithms. Recent work on model inversion attacks (MIA) has demonstrated that an ML model can leak information about its training dataset. In this work, we examine existing inversion attacks and propose a hardware-oriented security solution to defend an AI system against MIA. First, we demonstrate that an ML algorithm's execution flow on physical hardware can be leveraged to secure a trained model. Then, we find that approximate main memory, such as undervolted DRAM, is useful for adding noise to a loaded model. Next, we design a secure algorithm, MIDAS, that ensures the safe execution of an ML algorithm in the presence of an adversary. After that, we evaluate MIDAS in terms of model accuracy degradation and similarity metrics. Finally, we examine MIDAS's security and privacy implications and its effectiveness in thwarting model inversion attacks. Our evaluations show that a hardware-dependent defense against MIA can ensure training-data privacy even on an untrusted hardware and software stack.
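The abstract's central mechanism is loading a trained model through approximate (undervolted) DRAM so that memory read errors perturb the weights before inference. The sketch below is a minimal simulation of that idea, assuming a uniform random bit-flip error model; the function name flip_random_bits and the bit-error rate ber are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def flip_random_bits(weights, ber=1e-6, rng=None):
    """Hypothetical model of undervolted-DRAM read errors: flip each bit
    of the float32 weight words independently with probability `ber`.
    (Illustrative assumption; the paper's actual error model may differ.)"""
    rng = rng or np.random.default_rng()
    noisy = np.ascontiguousarray(weights, dtype=np.float32).copy()
    bits = noisy.view(np.uint32).ravel()         # raw 32-bit words
    n_flips = rng.binomial(bits.size * 32, ber)  # total bits that flip
    word = rng.integers(0, bits.size, size=n_flips)
    pos = rng.integers(0, 32, size=n_flips).astype(np.uint32)
    np.bitwise_xor.at(bits, word, np.uint32(1) << pos)  # XOR flips the chosen bit
    return noisy

# Perturb one layer's weights as if they were read from approximate DRAM.
w = np.random.default_rng(0).standard_normal((256, 128)).astype(np.float32)
w_noisy = flip_random_bits(w, ber=1e-4)
print("fraction of weights changed:", np.mean(w_noisy != w))
```

Because the noise is injected by the memory hardware at load time rather than by software, an adversary querying the deployed model only ever observes the perturbed weights, which is what degrades the fidelity of an inversion attack.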
Year
2020
DOI
10.1109/AsianHOST51057.2020.9358254
Venue
2020 Asian Hardware Oriented Security and Trust Symposium (AsianHOST)
Keywords
Hardware Oriented Security, Deep Neural Network (DNN), Model Inversion Attack (MIA), Dynamic Random Access Memory (DRAM)
DocType
Conference
ISBN
978-1-7281-8953-6
Citations
0
PageRank
0.34
References
0
Authors
3
Name                Order  Citations  PageRank
Qian Xu             1      3          2.43
Md Tanvir Arafin    2      1          1.37
Gang Qu             3      2476       270.62