Title
An Abstract Architecture for Explainable Autonomy in Hazardous Environments
Abstract
Autonomous robotic systems are being proposed for use in hazardous environments, often to reduce the risks to human workers. In the immediate future, it is likely that human workers will continue to use and direct these autonomous robots, much like other computerised tools but with more sophisticated decision-making. Therefore, one important area on which to focus engineering effort is ensuring that these users trust the system. Recent literature suggests that explainability is closely related to how trustworthy a system is. Like safety and security properties, explainability should be designed into a system, instead of being added afterwards. This paper presents an abstract architecture that supports an autonomous system explaining its behaviour (explainable autonomy), providing a design template for implementing explainable autonomous systems. We present a worked example of how our architecture could be applied in the civil nuclear industry, where both workers and regulators need to trust the system’s decision-making capabilities.
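The paper describes an abstract architecture rather than a concrete implementation, so the sketch below is not taken from the paper. It is a minimal, hypothetical Python illustration of the general idea of explainable autonomy: a decision-making (rational) agent records the reason behind each choice so that a human operator can later ask why an action was taken. All names here (ExplainableAgent, decide, explain, the rule format, and the example thresholds) are assumptions for illustration only, not the authors' design.

```python
# Hypothetical sketch of explainable autonomy (not the paper's architecture).
# A toy agent selects actions from simple rules and logs a human-readable
# justification at decision time, so an operator can ask "why did you do that?".

from dataclasses import dataclass, field


@dataclass
class Decision:
    action: str   # what the agent chose to do
    reason: str   # justification recorded when the choice was made


@dataclass
class ExplainableAgent:
    """Toy rational agent: (condition, action, reason) rules plus a decision log."""
    rules: list
    log: list = field(default_factory=list)

    def decide(self, beliefs: dict) -> str:
        """Pick the first applicable action and record why it was chosen."""
        for condition, action, reason in self.rules:
            if condition(beliefs):
                self.log.append(Decision(action, reason))
                return action
        self.log.append(Decision("wait", "no rule applied to the current beliefs"))
        return "wait"

    def explain(self, step: int = -1) -> str:
        """Return the recorded justification for a past decision."""
        d = self.log[step]
        return f"Action '{d.action}' was chosen because {d.reason}."


# Purely illustrative scenario loosely inspired by the nuclear-inspection setting.
agent = ExplainableAgent(rules=[
    (lambda b: b["radiation"] > 100, "retreat",
     "the measured radiation exceeded the assumed safe threshold of 100"),
    (lambda b: b["battery"] < 20, "return_to_base",
     "the battery level dropped below 20%"),
    (lambda b: True, "continue_survey",
     "no hazard or resource constraint was detected"),
])

agent.decide({"radiation": 150, "battery": 80})
print(agent.explain())  # Action 'retreat' was chosen because the measured radiation ...
```

In this kind of design the explanation is generated from information captured at decision time, rather than reconstructed afterwards, which matches the paper's point that explainability should be designed in from the start.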
Year
2022
DOI
10.1109/REW56159.2022.00027
Venue
2022 IEEE 30th International Requirements Engineering Conference Workshops (REW)
Keywords
Autonomous Systems, Explainable AI, Explainable Autonomy, Software Architecture, Rational Agents
DocType
Conference
ISSN
2770-6826
ISBN
978-1-6654-6001-9
Citations
0
PageRank
0.34
References
12
Authors
3
Name             Order   Citations   PageRank
Matt Luckcuck    1       0           0.34
Hazel M Taylor   2       0           0.34
Marie Farrell    3       0           0.34