Title
VeriDL: Integrity Verification of Outsourced Deep Learning Services
Abstract
Deep neural networks (DNNs) have become prominent due to their superior performance in many fields. The deep-learning-as-a-service (DLaaS) paradigm enables individuals and organizations (clients) to outsource their DNN training tasks to cloud-based platforms. However, the DLaaS server may return incorrect DNN models for various reasons (e.g., Byzantine failures). This raises the serious concern of how to verify whether the DNN models trained by potentially untrusted DLaaS servers are indeed correct. To address this concern, in this paper we design VeriDL, a framework that supports efficient correctness verification of DNN models in the DLaaS paradigm. The key idea of VeriDL is a small cryptographic proof of the DNN model's training process, which is associated with the model and returned to the client. Using this proof, VeriDL verifies the correctness of the DNN model returned by the DLaaS server with a deterministic guarantee and low overhead. Our experiments on four real-world datasets demonstrate the efficiency and effectiveness of VeriDL.
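The abstract describes the verification workflow only at a high level: the server trains the model, attaches a small proof of the training process, and the client checks the proof before accepting the model. The sketch below is a minimal illustration of that client-server message flow, not VeriDL's actual construction; the DLaaSServer and Client classes, the toy mean-predictor "training", and the hash-based commitment used as the proof are all hypothetical stand-ins for the cryptographic proof described in the paper.

    # Minimal sketch of the DLaaS message flow implied by the abstract.
    # The hash-based "proof" below is only a placeholder: it binds the returned
    # model to the submitted training data, whereas VeriDL's proof (not shown
    # here) attests to the correctness of the training process itself.
    import hashlib
    import json


    def digest(obj) -> str:
        # Deterministic SHA-256 digest of any JSON-serializable object.
        return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()


    class DLaaSServer:
        # Hypothetical server: trains a toy model and returns (model, proof).
        def train(self, data):
            # Toy "training": a mean predictor stands in for a real DNN.
            model = {"mean": sum(data) / len(data)}
            proof = digest({"data": digest(data), "model": digest(model)})
            return model, proof


    class Client:
        # Hypothetical client: outsources training and verifies the result.
        def verify(self, data, model, proof) -> bool:
            # Re-derive the commitment; accept only if it matches the proof.
            expected = digest({"data": digest(data), "model": digest(model)})
            return expected == proof


    if __name__ == "__main__":
        data = [1.0, 2.0, 3.0, 4.0]
        server = DLaaSServer()
        model, proof = server.train(data)
        print("accept" if Client().verify(data, model, proof) else "reject")

In the paper's setting, the proof additionally allows the client to check that training was carried out correctly, with a deterministic guarantee; the placeholder above only demonstrates the accept/reject interface by binding the returned model to the submitted data.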
Year
2021
DOI
10.1007/978-3-030-86520-7_36
Venue
Machine Learning and Knowledge Discovery in Databases, ECML PKDD 2021: Research Track, Part II
Keywords
Deep learning, Integrity verification, Deep-learning-as-a-service
DocType
Conference
Volume
12976
ISSN
0302-9743
Citations
0
PageRank
0.34
References
0
Authors
3
Name, Order, Citations, PageRank
Boxiang Dong1179.45
Bo Zhang29547.19
Wendy Hui Wang313313.82