| Title |
|---|
| Uncertainty Wrappers for Data-Driven Models: Increase the Transparency of AI/ML-Based Models Through Enrichment with Dependable Situation-Aware Uncertainty Estimates |
| Abstract |
|---|
| In contrast to established safety-critical software components, we can neither prove nor assume that the outcomes of components containing models based on artificial intelligence (AI) or machine learning (ML) will be correct in every situation. Thus, uncertainty is an inherent part of decision-making when using the outcomes of data-driven models created by AI/ML algorithms. In order to deal with this - especially in the context of safety-related systems - we need to make uncertainty transparent via dependable statistical statements. This paper introduces both a conceptual model and the related mathematical foundation of an uncertainty wrapper solution for data-driven models. The wrapper enriches existing data-driven models, such as those provided by ML or other AI techniques, with case-individual and sound uncertainty estimates. The task of traffic sign recognition is used to illustrate the approach, which considers uncertainty not only in terms of model fit but also in terms of data quality and scope compliance. |
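The abstract's core idea - wrapping a data-driven model so that each outcome is accompanied by an uncertainty estimate reflecting model fit, data quality, and scope compliance - can be sketched in code. The following is a minimal illustration only, not the paper's actual conceptual model or API: the class, its parameters, the check functions, and the conservative fallback values are all assumptions made for this sketch.

```python
from dataclasses import dataclass

@dataclass
class WrappedOutcome:
    prediction: str
    uncertainty: float  # estimated probability that the prediction is wrong

class UncertaintyWrapper:
    """Illustrative sketch of an uncertainty wrapper; names and logic are
    assumptions for this example, not taken from the paper."""

    def __init__(self, model, model_fit_error, quality_checks, scope_checks):
        self.model = model                      # the wrapped data-driven model
        self.model_fit_error = model_fit_error  # error rate validated in scope
        self.quality_checks = quality_checks    # callables: input -> True if quality is ok
        self.scope_checks = scope_checks        # callables: input -> True if input is in scope

    def predict(self, x):
        prediction = self.model(x)
        if not all(check(x) for check in self.scope_checks):
            # Outside the target application scope: no dependable
            # statistical statement is possible, so report full uncertainty.
            return WrappedOutcome(prediction, 1.0)
        if not all(check(x) for check in self.quality_checks):
            # Degraded input quality: fall back to a conservative bound
            # (the value 0.5 is an arbitrary assumption for this sketch).
            return WrappedOutcome(prediction, max(self.model_fit_error, 0.5))
        # In scope with acceptable quality: uncertainty is the validated
        # model-fit error.
        return WrappedOutcome(prediction, self.model_fit_error)

# Toy traffic-sign example (all inputs and thresholds are made up):
wrapper = UncertaintyWrapper(
    model=lambda x: "stop",
    model_fit_error=0.05,
    quality_checks=[lambda x: x["brightness"] > 0.2],
    scope_checks=[lambda x: x["daytime"]],
)
```

Separating the scope check from the quality check mirrors the abstract's distinction: an out-of-scope input invalidates any statistical statement, whereas poor data quality merely widens the uncertainty of an in-scope outcome.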
| Year | DOI | Venue |
|---|---|---|
| 2019 | 10.1007/978-3-030-26250-1_29 | Computer Safety, Reliability, and Security (SAFECOMP 2019) |
| Keywords | DocType | Volume |
|---|---|---|
| Artificial intelligence, Machine learning, Dependability, Safety engineering, Data quality, Operational design domain, Model validation | Conference | 11699 |

| ISSN | Citations | PageRank |
|---|---|---|
| 0302-9743 | 0 | 0.34 |
| References | Authors |
|---|---|
| 0 | 2 |
| Name | Order | Citations | PageRank |
|---|---|---|---|
| Michael Kläs | 1 | 96 | 9.50 |
| Lena Sembach | 2 | 0 | 0.34 |