Abstract |
---|
Summary form only given. The main purpose of computational awareness (CA) is to make human beings more aware. A well-designed aware system should be able to detect novel events, inform us when they occur, and tell us what to do. In many cases, however, we may not trust the machine: we want to know why an event is novel, and why we should take certain actions when it is observed. For this purpose, it is important to interpret the reasoning process of the aware system in a human-understandable way. In this study, we provide a method for translating an aware system into an expert system. Using the expert system, it is possible to justify the reasoning process behind any decision. As a case study, we show how to translate a learned 3-valued logic multilayer perceptron into an expert system. The proposed method offers a possible way of obtaining interpretable aware systems through the learning of many-layered neural networks. |
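The translation idea described in the abstract can be sketched as follows. This is a minimal, purely illustrative example, not the paper's actual method: the `tri` threshold function, the toy weights, and the rule format are all hypothetical. It quantizes each unit of a tiny perceptron to the 3-valued set {-1, 0, +1} and then enumerates the 3-valued inputs to emit human-readable if-then rules, each carrying the hidden-layer pattern as an intermediate justification.

```python
import itertools

# Hypothetical 3-valued quantizer: maps a real activation to -1, 0, or +1.
def tri(x, eps=0.5):
    if x > eps:
        return 1
    if x < -eps:
        return -1
    return 0

# Toy "learned" weights for a 2-input, 2-hidden, 1-output perceptron
# (illustrative values only, not taken from the paper).
W1 = [[1.0, 1.0], [1.0, -1.0]]   # hidden-layer weights
b1 = [-0.6, 0.0]                 # hidden-layer biases
W2 = [1.0, -1.0]                 # output weights
b2 = 0.0                         # output bias

def forward(x):
    """Run the 3-valued network; return (hidden pattern, output)."""
    h = [tri(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    y = tri(sum(w * hi for w, hi in zip(W2, h)) + b2)
    return h, y

# "Translate" the network into an expert-system rule base by enumerating
# all 3-valued input combinations; each rule keeps the hidden pattern so
# a decision can be traced through the intermediate concepts.
rules = [(x, tuple(forward(x)[0]), forward(x)[1])
         for x in itertools.product([-1, 0, 1], repeat=2)]

for x, h, y in rules:
    print(f"IF input={x} THEN hidden={h} => output={y}")
```

Because the rule base is exhaustive over the finite 3-valued input space, every network decision maps to exactly one rule, which is what makes the reasoning inspectable.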
Year | DOI | Venue |
---|---|---|
2016 | 10.1109/CBD.2016.064 | 2016 International Conference on Advanced Cloud and Big Data (CBD) |
Keywords | Field | DocType |
---|---|---|
Computational awareness, aware system, 3-valued logic, multilayer perceptron, expert system | Computer science, Expert system, Multilayer perceptron, Artificial intelligence, Cognition, Artificial neural network, Big data, Machine learning | Conference
ISBN | Citations | PageRank |
---|---|---|
978-1-5090-3678-3 | 0 | 0.34
References | Authors |
---|---|
0 | 1
Name | Order | Citations | PageRank |
---|---|---|---|
Qiangfu Zhao | 1 | 214 | 62.36 |