Title
Large-Scale Evaluation of the Efficiency of Runtime-Verification Tools in the Wild
Abstract
Runtime verification (RV) is a field of study that suffers from a lack of dedicated benchmarks. Many published evaluations of RV tools rely on workloads that are not representative of real-world programs. In this paper, we present a methodology to automatically discover relevant open-source projects for evaluating RV tools by analyzing unit tests across a large number of projects hosted on GitHub. Our evaluation shows that analyzing a large number of open-source projects—instead of a handful of manually selected workloads—provides better insight into the behavior of three state-of-the-art RV tools (JavaMOP, MarQ, and Muffin) based on two metrics (memory utilization and runtime overhead). By monitoring test executions of a large number of projects, we show that no single evaluated RV tool wins on both metrics.
Year
2018
DOI
10.1109/APSEC.2018.00091
Venue
2018 25th Asia-Pacific Software Engineering Conference (APSEC)
Keywords
Tools, Benchmark testing, Open source software, Monitoring, Runtime, Java, Measurement
Field
Computer science, Real-time computing, Runtime verification, Embedded system
DocType
Conference
ISSN
1530-1362
ISBN
978-1-7281-1970-0
Citations
0
PageRank
0.34
References
0
Authors
2
Name          | Order | Citations | PageRank
Omar Javed    | 1     | 0         | 0.34
Walter Binder | 2     | 1077      | 92.58