Abstract |
---|
Current approaches for analyzing a large number of open-source projects mainly focus on data mining or on static analysis techniques. In contrast, research applying dynamic analyses such as Runtime Verification (RV) to open-source projects is scarce, owing to the lack of automated means for executing arbitrary pieces of software that rely on complex dependencies and input parameters. In this paper, we present a fully automated infrastructure, JUniVerse, for conducting large-scale studies on unit tests in open-source projects in the wild. The proposed infrastructure runs on a cluster for parallel execution. We demonstrate the effectiveness of JUniVerse by conducting a large-scale study on Java projects hosted on GitHub. We apply selection criteria based on static analysis to select 3,490 active projects. To show the feasibility of JUniVerse, we choose RV as a case study and investigate the applicability of 182 publicly available JavaMOP specifications to the code exercised by unit tests. Our study reveals that 37 of the 182 specifications (i.e., 20%) are not applicable to the code exercised by unit tests of real-world projects. Finally, with JUniVerse, we are able to identify a set of specifications and projects for future RV studies.
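As a concrete illustration of the kind of code such a study monitors, the sketch below shows a minimal JUnit test whose execution exercises the java.util.Iterator API. Under a JavaMOP-instrumented build, a specification such as Iterator_HasNext (which checks that next() is called only after hasNext() has returned true) could be monitored against exactly this kind of test run. The class name IteratorUsageTest and the choice of that particular specification are illustrative assumptions, not details taken from the paper.

```java
import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

import org.junit.Test;

// Illustrative only: a plain JUnit test that exercises the Iterator API.
// When executed under RV instrumentation, a spec such as Iterator_HasNext
// would observe the hasNext()/next() call sequence produced by this test.
public class IteratorUsageTest {

    @Test
    public void iteratesOverAllElements() {
        List<String> items = new ArrayList<>();
        items.add("a");
        items.add("b");

        int count = 0;
        Iterator<String> it = items.iterator();
        while (it.hasNext()) { // each next() is guarded by hasNext(), as the spec expects
            it.next();
            count++;
        }
        assertEquals(2, count);
    }
}
```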
Year | DOI | Venue |
---|---|---|
2019 | 10.1145/3297280.3297453 | SAC |
Keywords | Field | DocType |
---|---|---|
dynamic program analysis, open-source projects, runtime verification, software-repository mining, unit testing | Software engineering, Computer science, Static analysis, Unit testing, Runtime verification, Software, Test analysis, Java, Dynamic program analysis | Conference |
ISBN | Citations | PageRank |
---|---|---|
978-1-4503-5933-7 | 0 | 0.34 |
References | Authors |
---|---|
0 | 3 |
Name | Order | Citations | PageRank |
---|---|---|---|
Omar Javed | 1 | 0 | 0.68 |
Alex Villazón | 2 | 325 | 27.73 |
Walter Binder | 3 | 1077 | 92.58 |