Title
Enabling Fair ML Evaluations for Security
Abstract
Machine learning is widely used in security research to classify malicious activity, ranging from malware to malicious URLs and network traffic. However, published performance numbers often seem to leave little room for improvement and, due to the wide range of datasets and configurations used, cannot be directly compared across alternative approaches; moreover, most evaluations have been found to suffer from experimental bias that inflates results. In this manuscript we discuss the implementation of Tesseract, an open-source tool for evaluating the performance of machine learning classifiers in a security setting that mimics a deployment with typical data feeds over an extended period of time. In particular, Tesseract allows a fair comparison of different classifiers in a realistic scenario, without disadvantaging any given classifier. Tesseract is available as open source to provide the academic community with a way to report sound and comparable performance results, and to help practitioners decide which system to deploy under specific budget constraints.
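The core idea the abstract describes, training on data from one period and tracking classifier performance on data observed afterwards, can be illustrated with a short sketch. The Python snippet below is a hypothetical illustration on synthetic data using scikit-learn; it does not reflect Tesseract's actual API, and all names and parameters in it are invented for the example.

```python
# Minimal sketch of a temporally consistent evaluation (NOT Tesseract's API).
# Synthetic data and all identifiers here are assumptions for illustration.
from datetime import datetime, timedelta
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Synthetic dataset: one feature vector, label, and timestamp per sample.
n = 1000
X = rng.normal(size=(n, 16))
y = rng.integers(0, 2, size=n)
start = datetime(2017, 1, 1)
timestamps = np.array([start + timedelta(days=int(d))
                       for d in rng.integers(0, 365, size=n)])

# Temporally consistent split: train only on samples observed before the
# split date, so no "future" knowledge leaks into the training set.
split = datetime(2017, 7, 1)
train_mask = timestamps < split
clf = RandomForestClassifier(random_state=0)
clf.fit(X[train_mask], y[train_mask])

# Evaluate on consecutive 30-day windows after the split to observe how
# performance evolves over deployment time, as a realistic feed would.
for month in range(6):
    lo = split + timedelta(days=30 * month)
    hi = lo + timedelta(days=30)
    window = (timestamps >= lo) & (timestamps < hi)
    if window.any():
        score = f1_score(y[window], clf.predict(X[window]))
        print(f"{lo:%Y-%m}: F1 = {score:.2f}")
```

Plotting the per-window scores rather than reporting a single aggregate number is what makes such an evaluation comparable across classifiers deployed on the same feed.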
Year
2018
DOI
10.1145/3243734.3278505
Venue
Computer and Communications Security (CCS)
Keywords
Evaluation, Malware, Machine Learning, Experimental Bias
DocType
Conference
ISBN
978-1-4503-5693-0
Citations
0
PageRank
0.34
References
10
Authors
5
Name                Order   Citations   PageRank
Feargus Pendlebury  1       11          2.30
Fabio Pierazzi      2       15          2.35
Roberto Jordaney    3       11          1.58
Johannes Kinder     4       464         23.49
Lorenzo Cavallaro   5       886         52.85