Abstract |
---|
Machine learning (ML) based decision making is becoming commonplace. For persons affected by ML-based decisions, a certain level of transparency regarding the properties of the underlying ML model can be fundamental. In this vision paper, we propose to issue consumer labels for trained and published ML models. These labels primarily target machine learning laypersons, such as the operators of an ML system, the executors of decisions, and the decision subjects themselves. Provided that consumer labels comprehensively capture the characteristics of the trained ML model, consumers are enabled to recognize when human intelligence should supersede artificial intelligence. In the long run, we envision a service that generates these consumer labels (semi-)automatically. In this paper, we survey the requirements that an ML system should meet and, correspondingly, the properties that an ML consumer label could capture. We further discuss the feasibility of operationalizing and benchmarking these requirements in the automated generation of ML consumer labels. |
Year | DOI | Venue
---|---|---
2019 | 10.1109/CogMI48466.2019.00033 | 2019 IEEE First International Conference on Cognitive Machine Intelligence (CogMI)
Keywords | DocType | ISBN
---|---|---
Artificial intelligence, machine learning, consumer labels, transparency, x-AI | Conference | 978-1-7281-6738-1
Citations | PageRank | References
---|---|---
0 | 0.34 | 15
Authors |
---|
3 |
Name | Order | Citations | PageRank
---|---|---|---
Christin Seifert | 1 | 0 | 1.35 |
Stefanie Scherzinger | 2 | 209 | 20.82 |
Lena Wiese | 3 | 139 | 22.55 |