Abstract |
---|
In recent years, machine learning (ML) models have been successfully applied in a variety of real-world applications. However, they are often complex and incomprehensible to human users. This can decrease trust in their outputs and render their usage in critical settings ethically problematic. As a result, several methods for explaining such ML models have been proposed recently, in particular for black-box models such as deep neural networks (NNs). Nevertheless, these methods predominantly explain model outputs in terms of inputs, disregarding the inner workings of the ML model computing those outputs. We present Argflow, a toolkit enabling the generation of a variety of 'deep' argumentative explanations (DAXs) for outputs of NNs on classification tasks. |
Year | DOI | Venue |
---|---|---|
2021 | 10.5555/3463952.3464229 | AAMAS |
DocType | Citations | PageRank
---|---|---|
Conference | 0 | 0.34
References | Authors
---|---|
0 | 10
Name | Order | Citations | PageRank |
---|---|---|---|
Adam Dejl | 1 | 0 | 0.34 |
Peter He | 2 | 0 | 0.34 |
Pranav Mangal | 3 | 0 | 0.34 |
Hasan Mohsin | 4 | 0 | 0.34 |
Bogdan Surdu | 5 | 0 | 0.34 |
Eduard Voinea | 6 | 0 | 0.34 |
Emanuele Albini | 7 | 2 | 2.08 |
Piyawat Lertvittayakumjorn | 8 | 2 | 3.06 |
Antonio Rago | 9 | 34 | 7.11 |
Francesca Toni | 10 | 343 | 27.02 |