Title
TRIPLEx: Triple Extraction for Explanation
Abstract
Transformer-based models are used to solve a variety of Natural Language Processing tasks. Still, these models are opaque and poorly understandable to their users. Current approaches to explainability focus on token importance, in which the explanation consists of a set of tokens relevant to the prediction, and on natural language explanations, in which the explanation is a generated piece of text. The latter are usually learned by design, with models trained end-to-end to provide both a prediction and an explanation, or they rely on powerful external text generators to do the heavy lifting for them. In this paper we present TRIPLEX, an explainability algorithm for Transformer-based models fine-tuned on Natural Language Inference, Semantic Text Similarity, or Text Classification tasks. TRIPLEX explains Transformer-based models by extracting a set of facts from the input data, subsuming it by abstraction, and generating a set of weighted triples as an explanation.
Year
DOI
Venue
2021
10.1109/CogMI52975.2021.00015
2021 IEEE Third International Conference on Cognitive Machine Intelligence (CogMI)
Keywords
DocType
ISBN
Explainable Artificial Intelligence, Transformer-based models, Natural Language Inference
Conference
978-1-6654-1622-1
Citations 
PageRank 
References 
0
0.34
13
Authors
3
Name	Order	Citations	PageRank
Mattia Setzu	1	0	0.34
Anna Monreale	2	5814	2.49
Pasquale Minervini	3	0	0.34