Title
Copy, Right? A Testing Framework for Copyright Protection of Deep Learning Models
Abstract
Deep learning models, especially large-scale, high-performance ones, can be very costly to train, demanding a considerable amount of data and computational resources. As a result, deep learning models have become one of the most valuable assets in modern artificial intelligence. Unauthorized duplication or reproduction of deep learning models can lead to copyright infringement and cause huge economic losses to model owners, calling for effective copyright protection techniques. Existing protection techniques are mostly based on watermarking, which embeds an owner-specified watermark into the model. While able to provide exact ownership verification, these techniques are 1) invasive, i.e., they need to tamper with the training process, which may affect the model's utility or introduce new security risks into the model; 2) prone to adaptive attacks that attempt to remove/replace the watermark or adversarially block its retrieval; and 3) not robust to the emerging model extraction attacks. The latest fingerprinting work on deep learning models, though non-invasive, also falls short when facing the diverse and ever-growing attack scenarios. In this paper, we propose a novel testing framework for deep learning copyright protection: DEEPJUDGE. DEEPJUDGE quantitatively tests the similarities between two deep learning models: a victim model and a suspect model. It leverages a diverse set of testing metrics and efficient test case generation algorithms to produce a chain of supporting evidence to help determine whether a suspect model is a copy of the victim model.
Advantages of DEEPJUDGE include: 1) non-invasive, as it works directly on the model and does not tamper with the training process; 2) efficient, as it only needs a small set of seed test cases and a quick scan of the two models; 3) flexible, i.e., it can easily incorporate new testing metrics or test case generation methods to obtain more confident and robust judgement; and 4) fairly robust to model extraction attacks and adaptive attacks. We verify the effectiveness of DEEPJUDGE under three typical copyright infringement scenarios, including model finetuning, pruning and extraction, via extensive experiments on both image classification and speech recognition datasets with a variety of model architectures.
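To make the idea of "quantitatively testing the similarities between two models" concrete, the following is a minimal, hypothetical sketch of one black-box similarity metric in the spirit of the abstract: the average distance between the two models' output probability vectors over a small set of seed test cases. It is not the authors' implementation; all names (`output_distance`, `seed_inputs`, the toy models) are illustrative assumptions.

```python
# Hypothetical sketch (not DEEPJUDGE's actual metric): average L2 distance
# between the output probability vectors of a victim and a suspect model
# on a small set of seed test cases. Smaller distance = more similar models.
import math

def output_distance(victim_predict, suspect_predict, seed_inputs):
    """Average L2 distance between output vectors on seed test cases.

    `victim_predict` and `suspect_predict` each map an input to a
    probability vector (any sequence of floats of equal length).
    """
    dists = []
    for x in seed_inputs:
        v = victim_predict(x)
        s = suspect_predict(x)
        dists.append(math.sqrt(sum((a - b) ** 2 for a, b in zip(v, s))))
    return sum(dists) / len(dists)

# Toy usage: an exact copy yields distance 0; an unrelated model does not.
victim = lambda x: [0.9, 0.1] if x > 0 else [0.2, 0.8]
copy_of_victim = victim
unrelated = lambda x: [0.5, 0.5]

seeds = [-1.0, 1.0]
print(output_distance(victim, copy_of_victim, seeds))  # 0.0
print(output_distance(victim, unrelated, seeds) > 0.0)  # True
```

In the paper's setting, such per-metric scores would be combined with other metrics and with generated (rather than fixed) test cases to build a chain of evidence about whether the suspect model is a copy.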
Year
2022
DOI
10.1109/SP46214.2022.9833747
Venue
2022 IEEE Symposium on Security and Privacy (SP)
Keywords
victim model, suspect model, deep learning, model extraction attacks, copyright protection, security risks, DEEPJUDGE, testing metrics, test case generation
DocType
Conference
ISSN
1081-6011
ISBN
978-1-6654-1317-6
Citations
0
PageRank
0.34
References
13
Authors
9
Name | Order | Citations | PageRank
Jialuo Chen | 1 | 1 | 0.70
wang jingyi | 2 | 72 | 16.19
Tinglan Peng | 3 | 0 | 0.34
Youcheng Sun | 4 | 0 | 0.34
Peng Cheng | 5 | 1481 | 85.79
Shouling Ji | 6 | 616 | 56.91
Xingjun Ma | 7 | 1 | 1.37
Bo Li | 8 | 971 | 111.71
Dawn Song | 9 | 7084 | 442.36