Title
Neural Network Model Obfuscation through Adversarial Training
Abstract
With the increasing commercialization of deep learning (DL) models, there is a growing need to protect them from illicit usage. For reasons of cost and ease of deployment, it is becoming increasingly common to run DL models on third-party hardware. Although hardware mechanisms such as Trusted Execution Environments (TEEs) exist to protect sensitive data, their availability is still limited, and they are not well suited to resource-demanding tasks, such as DL models, that benefit from hardware accelerators. In this work, we make model stealing more difficult by presenting a novel way to split a DL model, keeping the main part on ordinary infrastructure and a small part in a remote TEE, and training it using adversarial techniques. In initial experiments on image classification models for the Fashion-MNIST and CIFAR-10 datasets, we observed that this obfuscation protection makes it significantly more difficult for an adversary to leverage the exposed model components.
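To illustrate the partitioning described in the abstract, the following is a minimal sketch, not the authors' implementation: a large "public" backbone intended for untrusted accelerator hardware and a small "private" head intended to run inside a TEE. All class names, layer sizes, and variable names are hypothetical, and the paper's adversarial training procedure is not reproduced here.

# Minimal sketch (hypothetical names and sizes), assuming a PyTorch-style split
# between an exposed backbone and a small confidential head.
import torch
import torch.nn as nn

class PublicBackbone(nn.Module):
    # Bulk of the network; its weights are assumed to be exposed to the host.
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Flatten(),
        )

    def forward(self, x):
        return self.features(x)

class PrivateHead(nn.Module):
    # Small final component; assumed to remain confidential inside the TEE.
    def __init__(self, in_features=64 * 7 * 7, num_classes=10):
        super().__init__()
        self.classifier = nn.Linear(in_features, num_classes)

    def forward(self, z):
        return self.classifier(z)

# Inference path: intermediate activations cross the trust boundary,
# but the head's parameters never leave the TEE.
backbone, head = PublicBackbone(), PrivateHead()
x = torch.randn(8, 1, 28, 28)   # e.g. a Fashion-MNIST-sized batch
logits = head(backbone(x))      # the head call would execute TEE-side in deployment
print(logits.shape)             # torch.Size([8, 10])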
Year
2022
DOI
10.1109/CCGrid54584.2022.00092
Venue
2022 22nd IEEE International Symposium on Cluster, Cloud and Internet Computing (CCGrid)
Keywords
model stealing, adversarial training, security, model confidentiality, TEE
DocType
Conference
ISBN
978-1-6654-9957-6
Citations
0
PageRank
0.34
References
3
Authors
3
Name                 Order   Citations   PageRank
Jakob Sternby        1       0           0.68
Björn Johansson      2       0           0.34
Michael Liljenstam   3       0           0.34