Title
exBERT: A Visual Analysis Tool to Explore Learned Representations in Transformer Models
Abstract
Large Transformer-based language models can route and reshape complex information via their multi-headed attention mechanism. Although the attention never receives explicit supervision, it can exhibit recognizable patterns following linguistic or positional information. Analyzing the learned representations and attentions is paramount to furthering our understanding of the inner workings of these models. However, analyses have to catch up with the rapid release of new models and the growing diversity of investigation techniques. To support analysis for a wide variety of models, we introduce exBERT, a tool to help humans conduct flexible, interactive investigations and formulate hypotheses for the model-internal reasoning process. exBERT provides insights into the meaning of the contextual representations and attention by matching a human-specified input to similar contexts in large annotated datasets. By aggregating the annotations of the matched contexts, exBERT can quickly replicate findings from the literature and extend them to previously unanalyzed models.
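The abstract's core mechanism is matching a query token's contextual representation against an annotated corpus and aggregating the annotations of its nearest neighbors. The following is a minimal sketch of that idea, not exBERT's actual implementation: the toy corpus, its POS tags, the choice of bert-base-uncased, and cosine similarity over last-layer hidden states are all illustrative assumptions.

```python
# Sketch: embed a query token, retrieve the most similar token contexts from a
# small annotated corpus, and tally their annotations (hypothetical data/setup).
from collections import Counter

import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # any Transformer exposing hidden states would do
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

# Toy "annotated dataset": sentences with per-word POS tags (illustrative only).
corpus = [
    ("the bank approved the loan", ["DET", "NOUN", "VERB", "DET", "NOUN"]),
    ("we sat on the river bank", ["PRON", "VERB", "ADP", "DET", "NOUN", "NOUN"]),
]

def embed_tokens(sentence):
    """Return (tokens, embeddings) from the model's last hidden layer."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]          # (seq_len, dim)
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    return tokens, hidden.numpy()

# Index every corpus token: store its embedding together with its annotation.
index_vecs, index_tags = [], []
for sentence, tags in corpus:
    tokens, vecs = embed_tokens(sentence)
    words = sentence.split()
    for tok, vec in zip(tokens, vecs):
        if tok in words:                 # crude word/tag alignment for the toy corpus
            index_vecs.append(vec)
            index_tags.append(tags[words.index(tok)])
index_mat = np.stack(index_vecs)
index_mat /= np.linalg.norm(index_mat, axis=1, keepdims=True)

def matched_annotations(sentence, word, k=3):
    """Aggregate annotations of the k corpus tokens most similar to `word`."""
    tokens, vecs = embed_tokens(sentence)
    query = vecs[tokens.index(word)]
    query /= np.linalg.norm(query)
    top = np.argsort(index_mat @ query)[::-1][:k]   # rank by cosine similarity
    return Counter(index_tags[i] for i in top)

print(matched_annotations("she deposited money at the bank", "bank"))
# e.g. Counter({'NOUN': 3}) -- the matched contexts suggest the query token acts as a noun
```

In the actual tool this search-and-summarize step runs over large annotated corpora and is combined with interactive attention views; the sketch only illustrates the aggregation principle.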
Year: 2020
DOI: 10.18653/v1/2020.acl-demos.22
Venue: ACL (demo)
DocType: Conference
Volume: 2020.acl-demos
Citations: 1
PageRank: 0.35
References: 0
Authors: 3
Name               | Order | Citations | PageRank
Benjamin Hoover    | 1     | 1         | 0.69
Hendrik Strobelt   | 2     | 387       | 21.65
Sebastian Gehrmann | 3     | 84        | 10.58