Abstract |
---|
Safety-critical systems strongly require quality aspects of artificial intelligence, including explainability. In this paper, we analyzed a trained network to extract the features that mainly contribute to its inference. Based on this analysis, we developed a simple solution to generate explanations of the inference process. |
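The abstract's idea of extracting the features that mainly contribute to an inference can be sketched under a simplifying assumption: a linear model where each feature's contribution is its weight times its input value. The function and example arrays below are purely illustrative, not the paper's actual method.

```python
import numpy as np

def feature_contributions(weights, x):
    """Per-feature contribution to the linear score w . x.

    Illustrative attribution rule (weight * input); the paper's
    analysis of a trained network is not specified here.
    """
    return weights * x

weights = np.array([0.1, -2.0, 0.5, 0.0])
x = np.array([1.0, 1.0, 2.0, 3.0])

contrib = feature_contributions(weights, x)
# Rank features by absolute contribution, most influential first;
# this ranking is one simple way to "explain" the inference.
ranking = np.argsort(-np.abs(contrib))
print(ranking.tolist())  # → [1, 2, 0, 3]
```

Ranking by absolute contribution is a deliberately simple choice; gradient-based saliency or more elaborate attribution schemes follow the same pattern of scoring and sorting features.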
Year | Venue | Field |
---|---|---|
2017 | arXiv: Computer Vision and Pattern Recognition | Life-critical system, Inference, Computer science, Artificial intelligence, Network analysis, Machine learning
DocType | Volume | Citations
---|---|---|
Journal | abs/1712.02890 | 0

PageRank | References | Authors
---|---|---|
0.34 | 0 | 2
Name | Order | Citations | PageRank |
---|---|---|---|
Hiroshi Kuwajima | 1 | 0 | 1.01 |
Masayuki Tanaka | 2 | 39 | 7.44 |