Title
Assurance levels for decision making in autonomous intelligent systems and their safety
Abstract
The autonomy of intelligent systems and their safety rely on their ability to make local decisions based on collected environmental information. This is all the more true for cyber-physical systems performing safety-critical activities. As long as this intelligence is partial and fragmented, and cognitive techniques are of limited maturity, the decision function produces results whose validity and scope must be weighed in light of the underlying assumptions, unavoidable uncertainty, and hypothetical safety limitations. Beyond the dependability of the cognitive techniques themselves, what matters is the assurance level of the decision making. Beyond the pure decision-making capabilities of the autonomous intelligent system, we need techniques that guarantee the system assurance required for the intended use. Security mechanisms for cognitive systems may consequently be tightly interwoven with them. We propose a trustworthiness module which is part of the system and contributes to its resulting safety. In this paper, we briefly review the state of the art regarding the dependability of cognitive techniques, the definition of assurance levels in this context, and related engineering practices. We elaborate on the design of autonomous intelligent system safety, then discuss its security design and approaches for mitigating safety violations by the cognitive functions.
Year: 2020
DOI: 10.1109/DESSERT50317.2020.9125079
Venue: 2020 IEEE 11th International Conference on Dependable Systems, Services and Technologies (DESSERT)
Keywords: Artificial intelligence, autonomous system decision making, safety monitoring design, assurance level
DocType: Conference
ISBN: 978-1-7281-9957-3
Citations: 0
PageRank: 0.34
References: 0
Authors: 4

Name                 Order  Citations  PageRank
Yannick Fourastier   1      0          0.34
Claude Baron         2      36         12.88
Carsten Thomas       3      0          0.34
Philippe Esteban     4      2          2.42