Abstract

Deep neural networks (DNNs) have achieved significant performance in various tasks. However, recent studies have shown that DNNs can be easily fooled by small perturbations to the input, known as adversarial attacks.
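The phenomenon the abstract describes can be sketched with a gradient-sign perturbation (the classic FGSM idea) on a toy linear classifier; the weights, input, and budget below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy setup (illustrative assumptions): a logistic classifier with
# random weights and a random "clean" input labeled y = 1.
rng = np.random.default_rng(0)
w = rng.normal(size=5)   # classifier weights
x = rng.normal(size=5)   # clean input
y = 1.0                  # true label in {0, 1}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient of the logistic loss w.r.t. the input x.
grad_x = (sigmoid(w @ x) - y) * w

# FGSM-style step: move each input coordinate by eps in the
# direction that increases the loss.
eps = 0.5
x_adv = x + eps * np.sign(grad_x)

clean_loss = -np.log(sigmoid(w @ x))
adv_loss = -np.log(sigmoid(w @ x_adv))
print(adv_loss > clean_loss)  # True: the small perturbation raises the loss
```

Because the logistic loss is monotone in `w @ x`, the sign step provably increases the loss here; on a real DNN the same one-step perturbation is only a first-order approximation, which is why such attacks succeed with surprisingly small budgets.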
Field | Value
---|---
Year | 2020
DOI | 10.1145/3447556.3447566
Venue | SIGKDD
DocType | Journal
Volume | 22
Issue | 2
ISSN | 1931-0145
Citations | 2
PageRank | 0.39
References | 19
Authors | 7
Name | Order | Citations | PageRank
---|---|---|---
Wei Jin | 1 | 83 | 25.25 |
Yaxin Li | 2 | 2 | 2.76 |
Han Xu | 3 | 2 | 1.41 |
Yiqi Wang | 4 | 31 | 3.77 |
Shuiwang Ji | 5 | 2579 | 122.25 |
Charu C. Aggarwal | 6 | 9081 | 636.68 |
Jiliang Tang | 7 | 3323 | 140.81 |