Abstract |
---|
Deep neural networks are vulnerable to adversarial attacks. More importantly, some adversarial examples crafted against an ensemble of source models transfer to other target models and thus pose a security threat to black-box applications (when attackers have no access to the target models). Current transfer-based ensemble attacks, however, only consider a limited number of source models to craf... |
Year | DOI | Venue
---|---|---
2022 | 10.1109/TNNLS.2020.3039295 | IEEE Transactions on Neural Networks and Learning Systems

Keywords | DocType | Volume
---|---|---
Computational modeling, Task analysis, Perturbation methods, Training, Neural networks, Generative adversarial networks, Gallium nitride | Journal | 33

Issue | ISSN | Citations
---|---|---
3 | 2162-237X | 0

PageRank | References | Authors
---|---|---
0.34 | 27 | 8
Name | Order | Citations | PageRank
---|---|---|---
Zhaohui Che | 1 | 23 | 7.29 |
Ali Borji | 2 | 1985 | 78.50 |
Guangtao Zhai | 3 | 1707 | 145.33 |
Suiyi Ling | 4 | 17 | 8.35 |
Jing Li | 5 | 106 | 12.33 |
Xiongkuo Min | 6 | 337 | 40.88 |
Guodong Guo | 7 | 2548 | 144.00 |
Patrick Le Callet | 8 | 1252 | 111.66 |