Abstract |
---|
Recent research(1) has shown that machine learning models are vulnerable to attacks by adversaries at almost every phase of the machine learning pipeline, such as poisoning attacks on training data, attacks on the learning algorithm, input (evasion) attacks based on carefully crafted adversarial samples, model stealing, and model inversion attacks. Maliciously crafted input samples can affect the learning process of an ML system by slowing learning, degrading the performance of the learned model, or causing the system to make errors. Understanding the security of machine learning algorithms and systems is therefore emerging as an important research area among computer security and machine learning researchers and practitioners. We present a survey of this emerging area: first, we define the processing pipeline of a generic machine learning system; then, we identify attacks at different points of the pipeline and their potential defense solutions. Finally, we summarize the work of this paper and propose directions for further research. |
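The evasion attacks surveyed above rest on a simple mechanism: perturbing an input in the direction that increases the model's loss. As a minimal, purely illustrative sketch (not taken from the paper), the following applies an FGSM-style perturbation to a toy logistic-regression classifier; all weights, inputs, and the `eps` budget are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Shift x by eps in the direction that increases the
    cross-entropy loss, i.e. along the sign of its input gradient."""
    p = sigmoid(w @ x + b)      # model's predicted probability of class 1
    grad_x = (p - y) * w        # d(cross-entropy)/dx for logistic regression
    return x + eps * np.sign(grad_x)

w = np.array([2.0, -1.0])       # hypothetical trained weights
b = 0.0
x = np.array([1.0, 0.5])        # clean input with true label y = 1

x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.6)

print(sigmoid(w @ x + b) > 0.5)      # True: clean input classified correctly
print(sigmoid(w @ x_adv + b) > 0.5)  # False: perturbed input flips the decision
```

The perturbation budget `eps` controls the trade-off the survey alludes to: a small `eps` keeps the adversarial sample close to the original while still crossing the decision boundary.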
Year | DOI | Venue |
---|---|---|
2018 | 10.1145/3207677.3277988 | PROCEEDINGS OF THE 2ND INTERNATIONAL CONFERENCE ON COMPUTER SCIENCE AND APPLICATION ENGINEERING (CSAE2018) |
Keywords | DocType | Citations
---|---|---|
machine learning, data poisoning, evasion attack, model inversion, adversarial samples | Conference | 0
PageRank | References | Authors
---|---|---|
0.34 | 0 | 3
Name | Order | Citations | PageRank |
---|---|---|---|
Yingchao Yu | 1 | 0 | 0.34 |
Xueyong Liu | 2 | 0 | 0.34 |
Zuoning Chen | 3 | 118 | 13.66 |