Abstract
---
Web addresses, or Uniform Resource Locators (URLs), are a vector through which attackers can deliver a multitude of unwanted and potentially harmful effects to users via malicious software. Detecting and blocking access to such URLs has traditionally relied on reactive, labour-intensive means such as human verification, whitelists, and blacklists. Machine learning has shown great potential to automate this defence and make it proactive through classifier models. Work in this area has produced numerous high-accuracy models, though the algorithms themselves remain fragile to adversarial manipulation if implemented without consideration of their security. Our work investigates the robustness of several classifiers for malicious URL detection by randomly perturbing samples in the training data. We show that, without a measure of defence against adversarial influence, highly accurate malicious URL detection can be significantly and adversely affected by even low degrees of training-data perturbation.
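The perturbation the abstract describes can be sketched as random label-flipping in the training set. The snippet below is a minimal, self-contained illustration only: the URLs are hypothetical, the lexical features are generic, and the deliberately simple 1-nearest-neighbour classifier stands in for the paper's actual models and dataset. It shows how a poisoned model's accuracy against the true labels degrades in proportion to the flipped fraction.

```python
import random

def url_features(url):
    # Crude lexical features often used in malicious-URL classification.
    return (len(url), sum(c.isdigit() for c in url), url.count('.'), url.count('-'))

class OneNN:
    """Minimal 1-nearest-neighbour classifier over URL feature tuples."""
    def fit(self, urls, labels):
        self.points = [(url_features(u), y) for u, y in zip(urls, labels)]
        return self
    def predict(self, url):
        f = url_features(url)
        # Return the label of the closest training point (squared distance).
        return min(self.points,
                   key=lambda p: sum((a - b) ** 2 for a, b in zip(f, p[0])))[1]

def flip_fraction(labels, fraction, seed=42):
    # The adversarial perturbation: randomly flip a fraction of training labels.
    rng = random.Random(seed)
    flipped = list(labels)
    for i in rng.sample(range(len(flipped)), round(fraction * len(flipped))):
        flipped[i] = 1 - flipped[i]
    return flipped

# Hypothetical toy data: 0 = benign, 1 = malicious. Padding with 'a'/'1'
# gives every URL a unique feature tuple.
urls = [f"www.example.com/{'a' * i}" for i in range(10)] + \
       [f"secure-update.example.xyz/{'1' * (i + 1)}" for i in range(10)]
labels = [0] * 10 + [1] * 10

def accuracy(model, urls, true_labels):
    return sum(model.predict(u) == y for u, y in zip(urls, true_labels)) / len(urls)

clean = OneNN().fit(urls, labels)
poisoned = OneNN().fit(urls, flip_fraction(labels, 0.2))

print(accuracy(clean, urls, labels))     # 1.0: each URL matches its own label
print(accuracy(poisoned, urls, labels))  # 0.8: the 20% flipped labels are memorised
```

Because 1-NN memorises its training set, every flipped label translates directly into a misclassification; the paper's point is that even learned models that generalise remain measurably vulnerable to such low-rate perturbations.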
Year | DOI | Venue
---|---|---
2021 | 10.1007/978-3-030-86586-3_5 | Trust, Privacy and Security in Digital Business (TrustBus 2021)

Keywords | DocType | Volume
---|---|---
Malicious URL, Detection, Adversarial machine learning | Conference | 12927

ISSN | Citations | PageRank
---|---|---
0302-9743 | 0 | 0.34

References | Authors
---|---
0 | 4
Name | Order | Citations | PageRank
---|---|---|---
Bruno Marchand | 1 | 0 | 0.34
Nikolaos Pitropakis | 2 | 39 | 8.40
William J. Buchanan | 3 | 108 | 27.45
Costas Lambrinoudakis | 4 | 393 | 46.57