Abstract | ||
---|---|---|
Deep neural networks (DNNs) have recently achieved human-level performance on many applications, such as computer vision and natural language processing. However, this promising solution is also subject to ever-increasing security challenges. Recent studies show that benign inputs polluted with intentionally crafted, imperceptible perturbations, known as "adversarial examples", can easily mislead the decision making of DNN models. To mitigate adversarial attacks, many defense solutions have been proposed, such as adversarial training and gradient masking. Orthogonal to those techniques, in this paper we survey a family of "smart" compression-based countermeasures that protect DNNs against adversarial attacks. These approaches systematically target several fundamental entities in the data processing of DNN models, including input feature compression through JPEG encoding, color depth reduction, or spatial smoothing, as well as model compression through parameter sharing. We summarize the pros and cons of enhancing DNN robustness with compression techniques, and hope that compression, originally aimed at reducing input or model size, can also serve as a defense technique to help secure DNN models. |
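Two of the input-compression defenses the abstract mentions, color depth reduction and spatial smoothing, can be illustrated with a minimal NumPy sketch. This is not code from the surveyed paper; the function names and the 3x3 median filter are illustrative assumptions about how such preprocessing is commonly implemented.

```python
import numpy as np

def reduce_color_depth(img, bits=4):
    """Quantize 8-bit pixel values down to 2**bits levels.

    Coarser quantization "squeezes out" small adversarial perturbations
    that fall below the quantization step. (Illustrative sketch.)
    """
    levels = 2 ** bits
    # Map [0, 255] -> {0, ..., levels-1}, round, then map back to [0, 255].
    quantized = np.round(img / 255.0 * (levels - 1))
    return (quantized / (levels - 1) * 255.0).astype(np.uint8)

def median_smooth(img, k=3):
    """k x k median filter over a 2-D grayscale image.

    Each output pixel becomes the median of its neighborhood, which
    blurs localized high-frequency perturbations. (Illustrative sketch;
    edge pixels are handled by replicating the border.)
    """
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out
```

In a defense pipeline, such functions would be applied to every input before it reaches the DNN, so that both benign and adversarial images pass through the same compression step.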
Year | DOI | Venue |
---|---|---|
2018 | 10.1109/ISVLSI.2018.00102 | IEEE Computer Society Annual Symposium on VLSI |
Field | DocType | ISSN
---|---|---|
Data modeling, Data processing, Computer science, Transform coding, Robustness (computer science), Color depth, Smoothing, JPEG, Artificial intelligence, Artificial neural network, Machine learning | Conference | 2159-3469
Citations | PageRank | References
---|---|---|
0 | 0.34 | 0
Authors | ||
---|---|---|
4 | ||