Title
Making aware systems interpretable
Abstract
We human beings are often careless, forgetful, and ignorant, and we often miss the right moment to seize an opportunity or avoid a risk. Computational awareness (CA) can help us become more aware. An aware system can distinguish novel events from normal ones, inform us when novel events are detected, and tell us the appropriate reaction. In many cases, we want to know why a situation is novel and why a particular reaction is necessary in that situation. The ability to provide understandable reasons makes a system more trustworthy, so designing interpretable aware systems should be an important goal of CA research. In this article, we provide a method for translating an aware system into an expert system that in turn can be used to explain the system's decisions. As a case study, we show the process of interpreting a learned 3-valued logic multilayer perceptron. The proposed method should be useful for achieving the goal of CA.
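The abstract does not spell out the translation procedure, so the following is only a minimal sketch of the general idea, assuming a network whose weights and activations are quantized to the 3-valued domain {-1, 0, +1} so that each unit can be read off as an if-then rule. All names here (ThreeValuedMLP, unit_to_rule, the sensor inputs) are hypothetical illustrations, not the paper's actual construction.

import numpy as np

def tri(x, theta=0.5):
    # Quantize to {-1, 0, +1}: -1 (false), 0 (unknown), +1 (true).
    return np.where(x > theta, 1, np.where(x < -theta, -1, 0))

class ThreeValuedMLP:
    # Toy MLP whose weights and activations are restricted to
    # {-1, 0, +1}, so each unit behaves like a 3-valued logic gate.
    def __init__(self, W_hidden, W_output):
        self.W_hidden = W_hidden  # shape: (n_hidden, n_inputs)
        self.W_output = W_output  # shape: (n_outputs, n_hidden)

    def forward(self, x):
        h = tri(self.W_hidden @ x)
        return tri(self.W_output @ h)

def unit_to_rule(weights, input_names, unit_name):
    # Read one quantized unit off as an if-then rule:
    # weight +1 -> the literal itself, -1 -> its negation, 0 -> dropped.
    literals = []
    for w, name in zip(weights, input_names):
        if w == 1:
            literals.append(name)
        elif w == -1:
            literals.append("NOT " + name)
    return "IF " + " AND ".join(literals) + " THEN " + unit_name

# Hypothetical example: two inputs, two hidden units, one output.
inputs = ["sensor_anomaly", "known_pattern"]
W_h = np.array([[1, -1],   # h1: anomaly present AND pattern not known
                [0,  1]])  # h2: pattern is known
W_o = np.array([[1, -1]])  # alert: h1 AND NOT h2

mlp = ThreeValuedMLP(W_h, W_o)
for i, row in enumerate(W_h):
    print(unit_to_rule(row, inputs, "h%d" % (i + 1)))
print(unit_to_rule(W_o[0], ["h1", "h2"], "raise_alert"))
print("output:", mlp.forward(np.array([1, -1])))  # -> [1], i.e. alert

Reading the rules out of the quantized weights is what turns the trained network into an expert-system-style rule base: each rule is a human-readable reason for the corresponding decision.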
Year
2016
DOI
10.1109/ICMLC.2016.7873003
Venue
2016 International Conference on Machine Learning and Cybernetics (ICMLC)
Keywords
Computational awareness, Aware systems, 3-valued logic, Multilayer perceptron, Expert system
Field
Computer science, Expert system, Multilayer perceptron, Artificial intelligence, Cognition, Machine learning, Cybernetics
DocType
Conference
Volume
2
ISBN
978-1-5090-0391-4
Citations
3
PageRank
0.69
References
0
Authors
1
Name          Order  Citations  PageRank
Qiangfu Zhao  1      2146       2.36