Title
Interpretable Visualizations of Deep Neural Networks for Domain Generation Algorithm Detection
Abstract
Due to their success in many application areas, deep learning models have found wide adoption. However, their black-box nature makes it hard to trust their decisions and to evaluate their line of reasoning. In the field of cybersecurity, this lack of trust and understanding poses a significant challenge to the utilization of deep learning models. Thus, we present a visual analytics system that provides designers of deep learning models for the classification of domain generation algorithms with understandable interpretations of their model. We cluster the activations of the model's nodes and leverage decision trees to explain these clusters. In combination with a 2D projection, the user can explore how the model views the data at different layers. In a preliminary evaluation of our system, we show how it can be employed to better understand misclassifications, identify potential biases, and reason about the roles different layers in a model may play.
Year
2020
DOI
10.1109/VizSec51108.2020.00010
Venue
2020 IEEE Symposium on Visualization for Cyber Security (VizSec)
Keywords
Explainable artificial intelligence (XAI), visual analytics, model visualization, DGA detection, cybersecurity
DocType
Conference
ISSN
2639-4359
ISBN
978-1-7281-8263-6
Citations
0
PageRank
0.34
References
0
Authors
4
Name, Order, Citations, PageRank
Franziska Becker, 1, 0, 0.68
Arthur Drichel, 2, 2, 1.76
Christoph Müller, 3, 1, 1.09
Thomas Ertl, 4, 4417, 401.52