Title |
---|
Comparing Three Notations for Defining Scenario-Based Model Tests: A Controlled Experiment |
Abstract |
---|
Scenarios are an established means to specify requirements for software systems. Scenario-based tests allow for validating software models against such requirements. In this paper, we consider three alternative notations to define such scenario tests on structural models: a semi-structured natural-language notation, a diagrammatic notation, and a fully-structured textual notation. In particular, we performed a study to understand how these three notations compare to each other with respect to the accuracy and effort of comprehending scenario-test definitions, as well as with respect to the detection of errors in the models under test. Twenty software professionals (software engineers, testers, researchers) participated in a controlled experiment based on six different comprehension and maintenance tasks. For each of these tasks, questions on a scenario-test definition and on a model under test had to be answered. In an ex-post questionnaire, the participants rated each notation on a number of dimensions (e.g., practicality or scalability). Our results show that the choice of a specific scenario-test notation can affect productivity (in terms of correctness and time effort) when testing software models for requirements conformance. In particular, the participants of our study spent comparatively less time and completed the tasks more accurately when using the natural-language notation than when using the other two notations. Moreover, the participants of our study explicitly expressed their preference for the natural-language notation. |
Year | DOI | Venue |
---|---|---|
2014 | 10.1109/QUATIC.2014.19 | Quality of Information and Communications Technology |
Keywords | DocType | ISBN |
---|---|---|
software systems,fully-structured textual notation,program testing,task analysis,scenario-based model tests,maintenance tasks,software maintenance,requirement specification,diagrammatic notation,software testing,natural language processing,software models,formal specification,semi-structured natural-language notation | Conference | 978-1-4799-6132-0 |
Citations | PageRank | References |
---|---|---|
10 | 0.46 | 18 |
Authors |
---|
3 |
Name | Order | Citations | PageRank |
---|---|---|---|
Bernhard Hoisl | 1 | 82 | 7.83 |
Stefan Sobernig | 2 | 143 | 18.97 |
Mark Strembeck | 3 | 874 | 57.86 |