Title
Systematic testing of refactoring engines on real software projects
Abstract
Testing refactoring engines is a challenging problem that has gained recent attention in research. Several techniques have been proposed to automate the generation of programs used as test inputs and to help developers inspect test failures. However, these techniques can require substantial effort for writing test generators or finding unique bugs, and they do not provide an estimate of how reliable refactoring engines are for refactoring tasks on real software projects. This paper evaluates an end-to-end approach for testing refactoring engines and estimating their reliability by (1) systematically applying refactorings at a large number of places in well-known, open-source projects and collecting failures during refactoring or while trying to compile the refactored projects, (2) clustering failures into a small, manageable number of failure groups, and (3) inspecting failures to identify non-duplicate bugs. Using this approach on the Eclipse refactoring engines for Java and C, we found and reported 77 new bugs for Java and 43 for C. Despite the seemingly large numbers of bugs, we found these refactoring engines to be relatively reliable, with only 1.4% of refactoring tasks failing for Java and 7.5% for C.
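Step (2) of the approach clusters many raw failures into a small number of failure groups. As a minimal sketch of that idea (the grouping key below, which strips volatile details such as line numbers and quoted identifiers, is a hypothetical stand-in for the paper's actual clustering criteria):

```python
import re
from collections import defaultdict

def cluster_failures(failures):
    """Group (refactoring, message) pairs so likely-duplicate failures land together.

    Hypothetical normalization: replace digits and single-quoted identifiers
    with a placeholder, so messages differing only in those details share a key.
    """
    clusters = defaultdict(list)
    for refactoring, message in failures:
        key = (refactoring, re.sub(r"\d+|'[^']*'", "_", message))
        clusters[key].append(message)
    return clusters

failures = [
    ("Rename", "cannot resolve 'foo' at line 12"),
    ("Rename", "cannot resolve 'bar' at line 97"),
    ("Extract Method", "duplicate method at line 3"),
]
clusters = cluster_failures(failures)
# The two Rename failures normalize to the same key and form one cluster,
# leaving two clusters for an inspector to examine instead of three failures.
```

An inspector would then examine one representative failure per cluster to decide whether it is a new, non-duplicate bug.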
Year
2013
DOI
10.1007/978-3-642-39038-8_26
Venue
ECOOP
Keywords
end-to-end approach, systematic testing, test input, eclipse refactoring engine, large number, testing refactoring engine, refactoring task, real software project, reliable refactoring engine, refactoring engine, test generator, test failure
Field
Programming language, Computer science, Compiler, Software, Eclipse, Cluster analysis, Code refactoring, Java, Systematic testing
DocType
Conference
Volume
7920
ISSN
0302-9743
Citations
14
PageRank
0.60
References
30
Authors
6
Name | Order | Citations | PageRank
Milos Gligoric | 1 | 504 | 35.62
Farnaz Behrang | 2 | 52 | 5.01
Yilong Li | 3 | 14 | 0.60
Jeffrey Overbey | 4 | 72 | 7.26
Munawar Hafiz | 5 | 224 | 15.40
Darko Marinov | 6 | 2502 | 126.52