| Title |
|---|
| The Language Interpretability Tool: Extensible, Interactive Visualizations and Analysis for NLP Models |
| Abstract |
|---|
| We present the Language Interpretability Tool (LIT), an open-source platform for visualization and understanding of NLP models. We focus on core questions about model behavior: Why did my model make this prediction? When does it perform poorly? What happens under a controlled change in the input? LIT integrates local explanations, aggregate analysis, and counterfactual generation into a streamlined, browser-based interface to enable rapid exploration and error analysis. We include case studies for a diverse set of workflows, including exploring counterfactuals for sentiment analysis, measuring gender bias in coreference systems, and exploring local behavior in text generation. LIT supports a wide range of models, including classification, seq2seq, and structured prediction, and is highly extensible through a declarative, framework-agnostic API. LIT is under active development, with code and full documentation available at https://github.com/pair-code/lit. |
| Year | DOI | Venue |
|---|---|---|
| 2020 | 10.18653/V1/2020.EMNLP-DEMOS.15 | EMNLP |
| DocType | Volume | Citations |
|---|---|---|
| Conference | 2020.emnlp-demos | 0 |
| PageRank | References | Authors |
|---|---|---|
| 0.34 | 0 | 11 |
| Name | Order | Citations | PageRank |
|---|---|---|---|
| Ian Tenney | 1 | 4 | 3.79 |
| James Wexler | 2 | 80 | 4.40 |
| Jasmijn Bastings | 3 | 0 | 0.68 |
| Tolga Bolukbasi | 4 | 115 | 8.02 |
| Andy Coenen | 5 | 1 | 2.37 |
| Sebastian Gehrmann | 6 | 84 | 10.58 |
| Ellen Jiang | 7 | 0 | 0.34 |
| Mahima Pushkarna | 8 | 0 | 0.34 |
| Carey Radebaugh | 9 | 0 | 0.34 |
| Emily Reif | 10 | 5 | 1.81 |
| Ann Yuan | 11 | 4 | 1.86 |