Title
Bayesian networks for interpretable machine learning and optimization
Abstract
As artificial intelligence is increasingly used in high-stakes applications, it is becoming ever more important that the models employed be interpretable. Bayesian networks offer a paradigm for interpretable artificial intelligence grounded in probability theory. They provide a semantics that enables a compact, declarative representation of a joint probability distribution over the variables of a domain by leveraging the conditional independencies among them. The representation consists of a directed acyclic graph that encodes these conditional independencies and a set of parameters that encodes the conditional distributions. This representation has provided a basis for the development of algorithms for probabilistic reasoning (inference) and for learning probability distributions from data. Bayesian networks are used for a wide range of machine learning tasks, including clustering, supervised classification, multi-dimensional supervised classification, anomaly detection, and temporal modeling. They also provide a basis for estimation of distribution algorithms, a class of evolutionary algorithms for heuristic optimization. We illustrate the use of Bayesian networks for interpretable machine learning and optimization by presenting applications in neuroscience, industry, and bioinformatics, covering a wide range of machine learning and optimization tasks.
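The representation the abstract describes, a directed acyclic graph plus conditional distributions that together factorize a joint distribution, can be sketched in a few lines of plain Python. The network (Rain and Sprinkler as independent parents of WetGrass), its probability values, and all variable names below are illustrative assumptions, not taken from the paper:

```python
from itertools import product

# Toy Bayesian network (illustrative numbers, not from the paper):
# Rain -> WetGrass <- Sprinkler, all variables binary.
p_rain = {True: 0.2, False: 0.8}            # P(Rain)
p_sprinkler = {True: 0.3, False: 0.7}       # P(Sprinkler)
# P(WetGrass=True | Rain, Sprinkler)
p_wet_given = {(True, True): 0.99, (True, False): 0.90,
               (False, True): 0.85, (False, False): 0.05}

def joint(r, s, w):
    """P(R=r, S=s, W=w) via the chain-rule factorization over the DAG:
    P(R, S, W) = P(R) * P(S) * P(W | R, S)."""
    pw = p_wet_given[(r, s)]
    return p_rain[r] * p_sprinkler[s] * (pw if w else 1.0 - pw)

# Because each local distribution is normalized, the joint sums to 1.
total = sum(joint(r, s, w) for r, s, w in product([True, False], repeat=3))

# Probabilistic reasoning (inference) by enumeration:
# P(Rain=True | WetGrass=True).
num = sum(joint(True, s, True) for s in [True, False])
den = sum(joint(r, s, True) for r, s in product([True, False], repeat=2))
posterior = num / den
```

The factorization is what makes the model compact: instead of the 2^3 - 1 free parameters of a full joint table, this network needs only 1 + 1 + 4 conditional-probability entries, and exact inference can be done by summing the factored joint over unobserved variables.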
Year
2021
DOI
10.1016/j.neucom.2021.01.138
Venue
Neurocomputing
Keywords
Interpretability, Explainable machine learning, Probabilistic graphical models
DocType
Journal
Volume
456
ISSN
0925-2312
Citations
0
PageRank
0.34
References
0
Authors
3
Name               Order  Citations  PageRank
Bojan Mihaljevic   1      8          2.91
Concha Bielza      2      9097       2.11
Pedro Larrañaga    3      7          4.57