Title
VerIDeep: Verifying Integrity of Deep Neural Networks through Sensitive-Sample Fingerprinting.
Abstract
Deep learning has become popular, and numerous cloud-based services are provided to help customers develop and deploy deep learning applications. Meanwhile, various attack techniques have been discovered that stealthily compromise a model's integrity. When a cloud customer deploys a deep learning model in the cloud and serves it to end-users, it is important for the customer to be able to verify that the deployed model has not been tampered with and that its integrity is protected. We propose a new low-cost and self-served methodology for customers to verify that the model deployed in the cloud is intact, while having only black-box access (e.g., via APIs) to the deployed model. Customers can detect arbitrary changes to their deep learning models. Specifically, we define Sensitive-Sample fingerprints, which are a small set of transformed inputs that make the model outputs sensitive to the model's parameters. Even small weight changes are clearly reflected in the model outputs and can be observed by the customer. Our experiments on different types of model integrity attacks show that we can detect model integrity breaches with high accuracy (>99%) and low overhead (<10 black-box model accesses).
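The core idea of the abstract, crafting inputs whose outputs depend as strongly as possible on the model weights, can be illustrated with the following sketch. This is not the authors' released code: it assumes a hypothetical PyTorch classifier `model` and seed input `x0`, and uses the squared norm of the output-to-weight gradient as a simplified proxy for the sensitivity objective described above.

import torch

def sensitive_sample(model, x0, steps=100, lr=0.01):
    """Craft one fingerprint input by gradient ascent on weight sensitivity (illustrative sketch)."""
    x = x0.clone().detach().requires_grad_(True)
    params = [p for p in model.parameters() if p.requires_grad]
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        y = model(x)
        # Gradient of the (summed) output w.r.t. the weights, kept in the graph
        # so its norm can be differentiated w.r.t. the input x.
        grads = torch.autograd.grad(y.sum(), params, create_graph=True)
        sensitivity = sum(g.pow(2).sum() for g in grads)
        (-sensitivity).backward()   # ascend: make the output more weight-sensitive
        opt.step()
        with torch.no_grad():
            x.clamp_(0.0, 1.0)      # keep the sample a valid (e.g., image) input
    return x.detach()

At verification time, the customer would submit such fingerprint inputs to the deployed model through its black-box API and compare the returned outputs with the locally recorded ones; any mismatch indicates that the deployed weights have been altered.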
Year: 2018
Venue: arXiv: Cryptography and Security
Field: Computer security, Computer science, Artificial intelligence, Deep learning, Small set, Deep neural networks, Distributed computing, Cloud computing
DocType:
Volume: abs/1808.03277
Citations: 0
Journal:
PageRank: 0.34
References: 30
Authors: 3
Name            Order    Citations    PageRank
Zecheng He      1        25           5.05
Tianwei Zhang   2        86           21.44
Ruby Lee        3        2460         261.28