Title | Count | Score | Year |
--- | --- | --- | --- |
Debugging Tests for Model Explanations | 0 | 0.34 | 2020 |
Concept Bottleneck Models | 0 | 0.34 | 2020 |
On Completeness-aware Concept-Based Explanations in Deep Neural Networks | 0 | 0.34 | 2020 |
An Evaluation of the Human-Interpretability of Explanation | 1 | 0.35 | 2019 |
Visualizing and Measuring the Geometry of BERT | 0 | 0.34 | 2019 |
Human-Centered Tools for Coping with Imperfect Algorithms During Medical Decision-Making | 5 | 0.46 | 2019 |
Automating Interpretability: Discovering and Testing Visual Concepts Learned by Neural Networks | 0 | 0.34 | 2019 |
Towards Automatic Concept-based Explanations | 4 | 0.40 | 2019 |
Do Neural Networks Show Gestalt Phenomena? An Exploration of the Law of Closure | 0 | 0.34 | 2019 |
A Benchmark for Interpretability Methods in Deep Neural Networks | 5 | 0.39 | 2019 |
Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV) | 23 | 0.68 | 2018 |
How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation | 9 | 0.43 | 2018 |
Local Explanation Methods for Deep Neural Networks Lack Sensitivity to Parameter Values | 8 | 0.45 | 2018 |
Evaluating Feature Importance Estimates | 4 | 0.39 | 2018 |
xGEMs: Generating Examplars to Explain Black-Box Models | 1 | 0.35 | 2018 |
Learning how to explain neural networks: PatternNet and PatternAttribution | 17 | 0.58 | 2018 |
To Trust Or Not To Trust A Classifier | 9 | 0.44 | 2018 |
Human-in-the-Loop Interpretability Prior | 2 | 0.36 | 2018 |
Sanity Checks for Saliency Maps | 18 | 0.66 | 2018 |
SmoothGrad: removing noise by adding noise | 51 | 1.41 | 2017 |
The (Un)reliability of saliency methods | 19 | 0.73 | 2017 |
QSAnglyzer: Visual Analytics for Prismatic Analysis of Question Answering System Evaluations | 0 | 0.34 | 2017 |
A Roadmap for a Rigorous Science of Interpretability | 42 | 1.93 | 2017 |
Examples are not enough, learn to criticize! Criticism for Interpretability | 11 | 0.48 | 2016 |
The Bayesian Case Model: A Generative Approach for Case-Based Reasoning and Prototype Classification | 31 | 1.48 | 2015 |
Inferring Team Task Plans from Human Meetings: A Generative Modeling Approach with Logic-Based Prior | 4 | 0.40 | 2015 |
Coherent Predictive Inference under Exchangeability with Imprecise Probabilities | 4 | 0.61 | 2015 |
Scalable And Interpretable Data Representation For High-Dimensional, Complex Data | 4 | 0.43 | 2015 |
Mind the Gap: A Generative Approach to Interpretable Feature Selection and Extraction | 18 | 0.96 | 2015 |
Inferring Robot Task Plans from Human Team Meetings: A Generative Modeling Approach with Logic-Based Prior | 1 | 0.36 | 2013 |
Machine Learning for Meeting Analysis | 0 | 0.34 | 2013 |
Learning about meetings | 4 | 0.42 | 2013 |
Quantitative estimation of the strength of agreements in goal-oriented meetings | 2 | 0.37 | 2013 |
Human-Inspired Techniques for Human-Machine Team Planning | 0 | 0.34 | 2012 |
Multiple relative pose graphs for robust cooperative mapping | 56 | 2.53 | 2010 |