Title
Interpretability via Model Extraction
Abstract
The ability to interpret machine learning models has become increasingly important now that machine learning is used to inform consequential decisions. We propose an approach called model extraction for interpreting complex, blackbox models. Our approach approximates the complex model with a much more interpretable model; as long as the approximation quality is good, statistical properties of the complex model are reflected in the interpretable model. We show how model extraction can be used to understand and debug random forests and neural nets trained on several datasets from the UCI Machine Learning Repository, as well as control policies learned for several classical reinforcement learning problems.
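The abstract's core idea is to train an interpretable surrogate on the blackbox's own predictions and judge it by fidelity to the blackbox. Below is a minimal sketch of that idea, assuming scikit-learn and a UCI-style dataset; the paper's actual algorithm is more involved (it extracts decision trees with active sampling over the input distribution), and the model choices and parameters here are illustrative only.

```python
# Illustrative sketch of model extraction (not the paper's exact algorithm):
# approximate a blackbox random forest with a shallow decision tree trained
# on the blackbox's predictions, then measure fidelity to the blackbox.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Complex blackbox model to be interpreted.
blackbox = RandomForestClassifier(n_estimators=100, random_state=0)
blackbox.fit(X_train, y_train)

# Interpretable surrogate: fit to the blackbox's labels, not the ground truth.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, blackbox.predict(X_train))

# Fidelity: how well the surrogate mimics the blackbox on held-out inputs.
fidelity = accuracy_score(blackbox.predict(X_test), surrogate.predict(X_test))
print(f"fidelity to blackbox: {fidelity:.3f}")
print(export_text(surrogate))  # human-readable decision rules
```

If fidelity is high, inspecting the extracted tree's rules gives (approximate) insight into the blackbox's behavior, which is how the abstract proposes to understand and debug such models.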
Year: 2017
Venue: arXiv: Learning
Field: Interpretability, Artificial intelligence, Model extraction, Artificial neural network, Random forest, Mathematics, Machine learning, Reinforcement learning, Debugging
DocType:
Volume: abs/1706.09773
Citations: 5
Journal:
PageRank: 0.44
References: 4
Authors: 3
Name            Order   Citations   PageRank
Osbert Bastani  1       87          10.29
Carolyn Kim     2       23          1.83
Hamsa Bastani   3       16          2.72