Abstract |
---|
Unit tests for object-oriented classes can be generated automatically using search-based testing techniques. Because the search algorithms are typically guided by structural coverage criteria, the resulting unit tests are often long and confusing, which may hinder developer adoption of such test generation tools and exacerbate both the test oracle problem and test maintenance. To counter this problem, we integrate an additional optimization target based on a model of test readability learned from human annotation data. We demonstrate on a selection of classes from the Guava library that this approach produces more readable unit tests without loss of coverage. |
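The abstract's core idea, combining a structural-coverage objective with a readability score predicted by a model learned from human ratings, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature set, weights, and `alpha` trade-off parameter are hypothetical stand-ins for the learned model described in the paper.

```python
def readability_score(test_lines, weights=(1.0, -0.05, -0.2)):
    """Toy linear readability model over surface features of a test.

    The paper learns such a model from human annotation data; the
    features (test length, number of method calls) and weights here
    are illustrative assumptions only.
    """
    bias, w_length, w_calls = weights
    length = len(test_lines)
    # Crude proxy for the number of method calls in the test body.
    calls = sum(line.count("(") for line in test_lines)
    return bias + w_length * length + w_calls * calls


def combined_fitness(coverage, test_lines, alpha=0.1):
    """Combine the primary coverage objective with a small readability
    term, so the search prefers readable tests among equally covering
    candidates without sacrificing coverage."""
    return coverage + alpha * readability_score(test_lines)
```

With this weighting, two candidate tests achieving identical coverage are ranked by readability, so shorter, simpler tests win ties during the search.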
Year | DOI | Venue |
---|---|---|
2015 | 10.1007/978-3-319-22183-0_17 | SSBSE |

Field | DocType | Citations |
---|---|---|
Search algorithm, Annotation, Computer science, Unit testing, Oracle, Readability, Artificial intelligence, Machine learning | Conference | 5 |

PageRank | References | Authors |
---|---|---|
0.39 | 6 | 5 |

Name | Order | Citations | PageRank |
---|---|---|---|
Ermira Daka | 1 | 56 | 1.89 |
José Creissac Campos | 2 | 473 | 42.36 |
Jonathan Dorn | 3 | 112 | 4.47 |
Gordon Fraser | 4 | 2625 | 116.22 |
Westley Weimer | 5 | 3510 | 162.27 |