Title
Adversarial Attacks and Defenses on Graphs
Abstract
Deep neural networks (DNNs) have achieved remarkable performance on a wide range of tasks. However, recent studies have shown that DNNs can be easily fooled by small perturbations of the input, known as adversarial attacks.
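The abstract's notion of an adversarial attack, a small input perturbation that flips a trained network's prediction, can be illustrated with the classic FGSM construction from the image domain. The sketch below is a generic illustration only, not the graph-specific attacks surveyed in the paper; the function name fgsm_perturb, the toy model, and the budget eps are illustrative assumptions.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, eps=0.05):
    """Illustrative FGSM-style attack: add a small perturbation in the
    direction of the loss gradient with respect to the input."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed-gradient step of size eps, then clamp to the valid input range.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

# Toy usage with a tiny classifier on random "images" (hypothetical setup).
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))
x = torch.rand(4, 1, 28, 28)       # batch of 4 fake 28x28 inputs
y = torch.randint(0, 10, (4,))     # fake labels
x_adv = fgsm_perturb(model, x, y)
print((x_adv - x).abs().max())     # perturbation is at most eps
```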
Year
2020
DOI
10.1145/3447556.3447566
Venue
SIGKDD
DocType
Volume
22
Issue
2
Journal
ISSN
1931-0145
Citations
2
PageRank
0.39
References
19
Authors
7
Name                Order  Citations  PageRank
Wei Jin             1      83         25.25
Yaxin Li            2      2          2.76
Han Xu              3      2          1.41
Yiqi Wang           4      3          13.77
Shuiwang Ji         5      2579       122.25
Charu C. Aggarwal   6      9081       636.68
Jiliang Tang        7      3323       140.81