Name: BEEN KIM
Affiliation: MIT, Cambridge, MA 02139 USA
Papers: 36
Collaborators: 119
Citations: 353
PageRank: 21.44
Referers: 969
Referees: 820
References: 364
Title | Citations | PageRank | Year
Debugging Tests for Model Explanations | 0 | 0.34 | 2020
Concept Bottleneck Models | 0 | 0.34 | 2020
On Completeness-aware Concept-Based Explanations in Deep Neural Networks | 0 | 0.34 | 2020
An Evaluation of the Human-Interpretability of Explanation | 1 | 0.35 | 2019
Visualizing and Measuring the Geometry of BERT | 0 | 0.34 | 2019
Human-Centered Tools for Coping with Imperfect Algorithms During Medical Decision-Making | 5 | 0.46 | 2019
Automating Interpretability: Discovering and Testing Visual Concepts Learned by Neural Networks | 0 | 0.34 | 2019
Towards Automatic Concept-based Explanations | 4 | 0.40 | 2019
Do Neural Networks Show Gestalt Phenomena? An Exploration of the Law of Closure | 0 | 0.34 | 2019
A Benchmark for Interpretability Methods in Deep Neural Networks | 5 | 0.39 | 2019
Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV) | 23 | 0.68 | 2018
How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation | 9 | 0.43 | 2018
Local Explanation Methods for Deep Neural Networks Lack Sensitivity to Parameter Values | 8 | 0.45 | 2018
Evaluating Feature Importance Estimates | 4 | 0.39 | 2018
xGEMs: Generating Examplars to Explain Black-Box Models | 1 | 0.35 | 2018
Learning how to explain neural networks: PatternNet and PatternAttribution | 17 | 0.58 | 2018
To Trust Or Not To Trust A Classifier | 9 | 0.44 | 2018
Human-in-the-Loop Interpretability Prior | 2 | 0.36 | 2018
Sanity Checks for Saliency Maps | 18 | 0.66 | 2018
SmoothGrad: removing noise by adding noise | 51 | 1.41 | 2017
The (Un)reliability of saliency methods | 19 | 0.73 | 2017
QSAnglyzer: Visual Analytics for Prismatic Analysis of Question Answering System Evaluations | 0 | 0.34 | 2017
A Roadmap for a Rigorous Science of Interpretability | 42 | 1.93 | 2017
Examples are not enough, learn to criticize! Criticism for Interpretability | 11 | 0.48 | 2016
The Bayesian Case Model: A Generative Approach for Case-Based Reasoning and Prototype Classification | 31 | 1.48 | 2015
Inferring Team Task Plans from Human Meetings: A Generative Modeling Approach with Logic-Based Prior | 4 | 0.40 | 2015
Coherent Predictive Inference under Exchangeability with Imprecise Probabilities | 4 | 0.61 | 2015
Scalable And Interpretable Data Representation For High-Dimensional, Complex Data | 4 | 0.43 | 2015
Mind the Gap: A Generative Approach to Interpretable Feature Selection and Extraction | 18 | 0.96 | 2015
Inferring Robot Task Plans from Human Team Meetings: A Generative Modeling Approach with Logic-Based Prior | 1 | 0.36 | 2013
Machine Learning for Meeting Analysis | 0 | 0.34 | 2013
Learning about meetings | 4 | 0.42 | 2013
Quantitative estimation of the strength of agreements in goal-oriented meetings | 2 | 0.37 | 2013
Human-Inspired Techniques for Human-Machine Team Planning | 0 | 0.34 | 2012
Multiple relative pose graphs for robust cooperative mapping | 56 | 2.53 | 2010