| Abstract |
|---|
| Performing sound and fair fuzzer evaluations can be challenging, not only because of the randomness involved in fuzzing, but also due to the large number of fuzz tests generated. Existing evaluations use code coverage as a proxy measure for fuzzing effectiveness. Yet, instead of considering coverage of all generated fuzz inputs, they only consider the inputs stored in the fuzzer queue. However, as we show in this paper, this approach can lead to biased assessments due to path collisions. Therefore, we developed FuzzTastic, a fuzzer-agnostic coverage analyzer that allows practitioners and researchers to perform uniform fuzzer evaluations that are not affected by such collisions. In addition, its time-stamped coverage-probing approach enables frequency-based coverage analysis to identify barely tested source code and to visualize fuzzing progress over time and across code. To foster further studies in this field, we make FuzzTastic, together with a benchmark dataset worth ~12 CPU-years of fuzzing, publicly available; the demo video can be found at https://youtu.be/Lm-eBx0aePA. |
| Year | DOI | Venue |
|---|---|---|
| 2022 | 10.1145/3510454.3516847 | 2022 IEEE/ACM 44th International Conference on Software Engineering: Companion Proceedings (ICSE-Companion) |

| Keywords | DocType | ISSN |
|---|---|---|
| Security and privacy → Software security engineering | Conference | 2574-1926 |

| ISBN | Citations | PageRank |
|---|---|---|
| 978-1-6654-9599-8 | 0 | 0.34 |

| References | Authors |
|---|---|
| 10 | 6 |
| Name | Order | Citations | PageRank |
|---|---|---|---|
| Stephan Lipp | 1 | 0 | 0.34 |
| Daniel Elsner | 2 | 0 | 0.34 |
| Thomas Hutzelmann | 3 | 4 | 1.15 |
| Sebastian Banescu | 4 | 0 | 0.34 |
| Alexander Pretschner | 5 | 26 | 9.69 |
| Marcel Böhme | 6 | 0 | 0.34 |