Title
An impact-driven approach to predict user stories instability
Abstract
A common way to describe requirements in Agile software development is through user stories, which are short descriptions of desired functionality. However, there are no widely accepted quantitative metrics for evaluating user stories. We propose a novel metric for evaluating user stories, called instability, which measures the number of changes made to a user story after it was assigned to a developer to be implemented in the near future. A high instability score suggests that the user story was not detailed and coherent enough to be implemented. The instability of a user story can be automatically extracted from industry-standard issue tracking systems such as Jira by performing a retrospective analysis of fully implemented user stories. We propose a method for creating prediction models that can identify user stories that will have high instability even before they have been assigned to a developer. Our method works by applying a machine learning algorithm to implemented user stories, considering only features that are available before a user story is assigned to a developer. We evaluate our prediction models on several open-source projects and one commercial project and show that they outperform baseline prediction models.
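To make the two ideas in the abstract concrete, the following is a minimal sketch (not the authors' implementation) of how instability could be computed from a Jira-style change history and how a prediction model could be trained on features available before assignment. The data fields, feature choices, threshold, and classifier below are illustrative assumptions, not details taken from the paper.

# Sketch only: illustrative schema and features, not the paper's actual setup.
from dataclasses import dataclass
from datetime import datetime
from typing import List

from sklearn.ensemble import RandomForestClassifier


@dataclass
class ChangeEvent:
    timestamp: datetime
    field: str  # e.g. "description", "summary", "acceptance criteria"


@dataclass
class UserStory:
    text: str
    created_at: datetime
    assigned_at: datetime
    changes: List[ChangeEvent]


def instability(story: UserStory) -> int:
    """Instability: number of changes made to the story after it was assigned."""
    return sum(1 for c in story.changes if c.timestamp > story.assigned_at)


def pre_assignment_features(story: UserStory) -> List[float]:
    """Features known before the story is assigned (illustrative choices only)."""
    words = story.text.split()
    return [
        float(len(words)),                # description length in words
        float(story.text.count("\n")),    # rough proxy for structure in the text
    ]


def train_instability_model(stories: List[UserStory], threshold: int = 3):
    """Fit a classifier labelling stories whose instability exceeds a threshold."""
    X = [pre_assignment_features(s) for s in stories]
    y = [int(instability(s) > threshold) for s in stories]
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X, y)
    return model

# Usage: model = train_instability_model(implemented_stories)
#        model.predict([pre_assignment_features(new_story)])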
Year
2022
DOI
10.1007/s00766-022-00372-w
Venue
Requirements Engineering
Keywords
User story, Requirements, Agile software development, Machine learning
DocType
Journal
Volume
27
Issue
2
ISSN
0947-3602
Citations
0
PageRank
0.34
References
11
Authors
5
Name              Order  Citations  PageRank
Yarden Levy       1      0          0.34
Roni Stern        2      335        49.62
Arnon Sturm       3      8          0.88
Argaman Mordoch   4      0          0.34
Yuval Bitan       5      0          0.34