Title
Taxonomy and Survey of Interpretable Machine Learning Methods
Abstract
Since traditional machine learning (ML) techniques use black-box models, the internal operation of the classifier is hidden from humans. Because of this black-box nature, the trustworthiness of an ML classifier's predictions is sometimes questionable. Interpretable machine learning (IML) is a way of dissecting ML classifiers to overcome this shortcoming and provide more reasoned explanations of model predictions. In this paper, we explore several IML methods and their applications in various domains. We present a detailed survey of IML methods and identify the essential building blocks of a black-box model. We also identify and describe the requirements of IML methods and, for completeness, propose a taxonomy that classifies IML methods into distinct groups and sub-categories. The goal, therefore, is to describe the state of the art in IML methods and to explain them in more concrete and understandable terms by providing a better basis of knowledge for these building blocks and our associated requirements analysis.
Year
2020
DOI
10.1109/SSCI47803.2020.9308404
Venue
2020 IEEE Symposium Series on Computational Intelligence (SSCI)
Keywords
Interpretable machine learning, taxonomy, survey, black box machine learning, machine learning
DocType
Conference
ISBN
978-1-7281-2548-0
Citations
0
PageRank
0.34
References
0
Authors
5
Name               Order  Citations  PageRank
Saikat Das         1      0          0.34
Namita Agarwal     2      0          0.34
Deepak Venugopal   3      8          3.15
Frederick Sheldon  4      86         16.46
Sajjan G. Shiva    5      116        23.02