Title: The Language Interpretability Tool: Extensible, Interactive Visualizations and Analysis for NLP Models
Abstract: We present the Language Interpretability Tool (LIT), an open-source platform for visualization and understanding of NLP models. We focus on core questions about model behavior: Why did my model make this prediction? When does it perform poorly? What happens under a controlled change in the input? LIT integrates local explanations, aggregate analysis, and counterfactual generation into a streamlined, browser-based interface to enable rapid exploration and error analysis. We include case studies for a diverse set of workflows, including exploring counterfactuals for sentiment analysis, measuring gender bias in coreference systems, and exploring local behavior in text generation. LIT supports a wide range of models--including classification, seq2seq, and structured prediction--and is highly extensible through a declarative, framework-agnostic API. LIT is under active development, with code and full documentation available at https://github.com/pair-code/lit.
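The abstract's "declarative, framework-agnostic API" refers to models describing their inputs and outputs via spec dictionaries. The sketch below illustrates that idea in standalone form: the method names (`input_spec`, `output_spec`, `predict`) are modeled on LIT's documented model interface, but the class is a hypothetical toy that runs without LIT installed, and plain strings stand in for LIT's real type objects.

```python
# Hypothetical, standalone sketch of the declarative-spec idea behind LIT's
# model API. Method names follow LIT's documented interface; the rule-based
# "model" and the string-valued specs are illustrative stand-ins only.

class ToySentimentModel:
    """A trivial rule-based sentiment model wrapped in LIT-style specs."""

    def input_spec(self):
        # Declares the fields each input example must provide.
        return {"text": "TextSegment"}

    def output_spec(self):
        # Declares the fields each prediction will contain.
        return {"probas": "MulticlassPreds(vocab=['negative', 'positive'])"}

    def predict(self, inputs):
        # inputs: iterable of dicts matching input_spec().
        for example in inputs:
            score = 1.0 if "good" in example["text"].lower() else 0.0
            yield {"probas": [1.0 - score, score]}


examples = [{"text": "A good movie."}, {"text": "Terrible acting."}]
preds = list(ToySentimentModel().predict(examples))
```

Because the frontend reads only these specs, a tool built this way can render the right visualizations for any model without knowing its framework, which is the design choice the abstract highlights.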
Year: 2020
DOI: 10.18653/V1/2020.EMNLP-DEMOS.15
Venue: EMNLP
DocType: Conference
Volume: 2020.emnlp-demos
Citations: 0
PageRank: 0.34
References: 0
Authors: 11
Name                Order  Citations  PageRank
Ian Tenney          1      4          3.79
James Wexler        2      80         4.40
Jasmijn Bastings    3      0          0.68
Tolga Bolukbasi     4      115        8.02
Andy Coenen         5      1          2.37
Sebastian Gehrmann  6      84         10.58
Ellen Jiang         7      0          0.34
Mahima Pushkarna    8      0          0.34
Carey Radebaugh     9      0          0.34
Emily Reif          10     5          1.81
Ann Yuan            11     4          1.86