Abstract
---
Artificial Intelligence (AI) techniques, such as machine learning (ML), have made significant progress over the past decade. Many systems have been applied to sensitive tasks involving critical infrastructures that affect human well-being or health. Before deploying an AI system, it is necessary to validate its behavior and ensure that it will continue to perform as expected in a real-world environment. For this reason, it is important to understand specific aspects of such systems. For example, understanding how neural networks produce final predictions remains a fundamental challenge. Existing work on interpreting neural network predictions for images via feature visualization often focuses on explaining predictions for neurons of a single convolutional layer. Without a global perspective on the features learned by the model, the user misses the bigger picture. In this work we focus on providing a representation based on the structure of deep neural networks: a visualization that gives the user a global perspective on the feature maps of a convolutional neural network (CNN) in a single image, revealing potential problems in the learned representations present in the network's feature maps.
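A global visualization of this kind first needs the feature maps of every convolutional layer, not just one. The sketch below illustrates that prerequisite step only, not the paper's concentric-ring design: it assumes PyTorch and torchvision, and uses a pretrained ResNet-18 as an arbitrary example model.

```python
# Minimal sketch (not the authors' implementation): collect the feature maps
# produced by every Conv2d layer of a CNN in a single forward pass, the raw
# material a network-wide visualization would operate on.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

feature_maps = {}  # layer name -> activation tensor


def make_hook(name):
    def hook(module, inputs, output):
        # Detach so stored activations do not keep the autograd graph alive.
        feature_maps[name] = output.detach()
    return hook


# Register a forward hook on every convolutional module in the network.
for name, module in model.named_modules():
    if isinstance(module, torch.nn.Conv2d):
        module.register_forward_hook(make_hook(name))

# One forward pass over a dummy input fills `feature_maps` with one
# activation tensor per convolutional layer.
with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))

for name, fmap in feature_maps.items():
    print(f"{name}: {tuple(fmap.shape)}")  # e.g. layer1.0.conv1: (1, 64, 56, 56)
```

With the per-layer activations gathered this way, any summary of them (channel means, norms, etc.) can be laid out layer by layer in a single image to expose the network's structure at a glance.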
Year | DOI | Venue |
---|---|---|
2020 | 10.1109/IV51561.2020.00054 | 2020 24th International Conference Information Visualisation (IV) |
Keywords | DocType | ISSN
---|---|---
Deep Learning Interpretability, Convolutional Neural Networks Feature Visualization, Concentric Ring Design | Conference | 1550-6037
ISBN | Citations | PageRank
---|---|---
978-1-7281-9135-5 | 0 | 0.34
References | Authors
---|---
0 | 5
Name | Order | Citations | PageRank |
---|---|---|---|
João Alves | 1 | 7 | 4.55 |
Tiago Araújo | 2 | 5 | 2.17 |
Bernardo Marques | 3 | 0 | 0.34 |
Paulo Dias | 4 | 0 | 0.34 |
Beatriz Sousa Santos | 5 | 5 | 2.18 |