Title
Training Data Poisoning in ML-CAD: Backdooring DL-Based Lithographic Hotspot Detectors
Abstract
Recent efforts to enhance computer-aided design (CAD) flows have seen the proliferation of machine learning (ML)-based techniques. However, despite achieving state-of-the-art performance in many domains, techniques such as deep learning (DL) are susceptible to various adversarial attacks. In this work, we explore the threat posed by training data poisoning attacks, in which a malicious insider tries to insert backdoors into a deep neural network (DNN) used as part of the CAD flow. Using a case study on lithographic hotspot detection, we explore how an adversary can contaminate training data with specially crafted, yet meaningful, genuinely labeled, and design-rule-compliant poisoned clips. Our experiments show that a very low ratio of poisoned to clean training data is sufficient to backdoor the DNN; an adversary can "hide" specific hotspot clips at inference time by including a backdoor trigger shape in the input, with ~100% success. This attack provides a novel way for adversaries to sabotage and disrupt the distributed design process. After finding that training data poisoning attacks are feasible and stealthy, we explore a potential ensemble defense against possible data contamination, which shows a promising reduction in attack success. Our results raise fundamental questions about the robustness of DL-based systems in CAD, and we provide insights into the implications of these findings.
Year
2021
DOI
10.1109/TCAD.2020.3024780
Venue
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
Keywords
Computer aided design, design for manufacture, machine learning (ML), robustness, security
DocType
Journal
Volume
40
Issue
6
ISSN
0278-0070
Citations
0
PageRank
0.34
References
0
Authors
4
Name | Order | Citations | PageRank
Kang Liu | 1 | 52 | 7.60
Benjamin Tan | 2 | 5 | 3.58
Ramesh Karri | 3 | 2968 | 224.90
Siddharth Garg | 4 | 0 | 1.01