Abstract |
---|
Robotics and autonomy systems are becoming increasingly important, moving from specialised factory domains to increasingly general and consumer-focused applications. As such systems grow ubiquitous, there is a commensurate need to protect against potentially catastrophic harm. System-level testing in simulation is a particularly promising approach for assuring robotics systems, allowing for more extensive testing in realistic scenarios and seeking bugs that may not manifest at the unit level. Ideally, such testing could find critical bugs well before expensive field-testing is required. However, simulations can only model coarse environmental abstractions, contributing to a common perception that robotics bugs can only be found in live deployment. To address this gap, we conduct an empirical study on bugs that have been fixed in the widely used, open-source ArduPilot system. We identify bug-fixing commits by exploiting commenting conventions in the version-control history. We provide a quantitative and qualitative evaluation of the bugs, focusing on characterising how the bugs are triggered and how they can be detected, with a goal of identifying how they can be best identified in simulation, well before field testing. To our surprise, we find that the majority of bugs manifest under simple conditions that can be easily reproduced in software-based simulation. Conversely, we find that system configurations and forms of input play an important role in triggering bugs. We use these results to inform a novel framework for testing for these and other bugs in simulation, consistently and reproducibly. These contributions can inform the construction of techniques for automated testing of robotics systems, with the goal of finding bugs early and cheaply, without incurring the costs of physically testing for bugs in live systems. |
Year | DOI | Venue
---|---|---
2018 | 10.1109/ICST.2018.00040 | 2018 IEEE 11th International Conference on Software Testing, Verification and Validation (ICST)

Keywords | Field | DocType
---|---|---
automated testing, empirical study, robotics, autonomous vehicles, dataset, repository mining, ardupilot, testing, simulation | Software deployment, Software engineering, Computer science, Software bug, Robot kinematics, Software, Autonomous system (Internet), Artificial intelligence, Surprise, Empirical research, Robotics, Reliability engineering | Conference

ISSN | ISBN | Citations
---|---|---
2381-2834 | 978-1-5386-5013-4 | 3

PageRank | References | Authors
---|---|---
0.42 | 23 | 5
Name | Order | Citations | PageRank
---|---|---|---
Christopher Steven Timperley | 1 | 12 | 2.02 |
Afsoon Afzal | 2 | 17 | 3.77 |
Deborah S. Katz | 3 | 5 | 2.16 |
Jam Marcos Hernandez | 4 | 3 | 0.42 |
Claire Le Goues | 5 | 1766 | 68.79 |