Abstract
---
Interpretable rationales for model predictions are crucial in practical applications. We develop neural models that possess an interpretable inference process for dependency parsing. Our models adopt instance-based inference, where dependency edges are extracted and labeled by comparing them to edges in a training set. The training edges are explicitly used for the predictions; thus, it is easy to grasp the contribution of each edge to the predictions. Our experiments show that our instance-based models achieve accuracy competitive with standard neural models and provide reasonably plausible instance-based explanations.
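The abstract only outlines the inference procedure, so the following is a minimal, hypothetical sketch rather than the authors' implementation: it assumes candidate and training edges have already been encoded as fixed-length vectors by some neural encoder, scores a candidate edge against training edges with a dot-product similarity followed by a softmax, and aggregates the weights per relation label. The function name, similarity choice, and label-aggregation scheme are all assumptions made for illustration.

```python
import numpy as np

def score_edge_instance_based(candidate_edge_vec, train_edge_vecs, train_edge_labels):
    """Label a candidate dependency edge by similarity to training edges.

    candidate_edge_vec: (d,) vector encoding one head-dependent pair.
    train_edge_vecs:    (n, d) matrix of encoded training edges.
    train_edge_labels:  list of n dependency relation labels.
    Returns the predicted label and per-training-edge weights, which expose
    how much each training edge contributed to the prediction.
    """
    # Similarity of the candidate edge to every training edge
    # (dot product here; the paper's exact similarity is not given in this record).
    sims = train_edge_vecs @ candidate_edge_vec          # shape (n,)
    weights = np.exp(sims - sims.max())
    weights /= weights.sum()                             # softmax over training edges

    # Aggregate weight mass per relation label; the heaviest label wins.
    label_scores = {}
    for w, label in zip(weights, train_edge_labels):
        label_scores[label] = label_scores.get(label, 0.0) + float(w)
    predicted = max(label_scores, key=label_scores.get)
    return predicted, weights


if __name__ == "__main__":
    # Toy usage with random "encoded" edges.
    rng = np.random.default_rng(0)
    train_vecs = rng.normal(size=(5, 8))
    train_labels = ["nsubj", "obj", "nsubj", "amod", "obj"]
    cand = rng.normal(size=8)
    label, contributions = score_edge_instance_based(cand, train_vecs, train_labels)
    print(label, contributions.round(3))
```

Because every prediction is a weighted vote over concrete training edges, the returned weights serve directly as the instance-based rationale the abstract refers to.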
Year | DOI | Venue
---|---|---
2021 | 10.1162/tacl_a_00439 | TRANSACTIONS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS

DocType | Volume | Citations
---|---|---
Journal | 9 | 0

PageRank | References | Authors
---|---|---
0.34 | 0 | 7

Name | Order | Citations | PageRank
---|---|---|---
Hiroki Ouchi | 1 | 18 | 8.08 |
Junichi Suzuki | 2 | 1265 | 112.15 |
Sosuke Kobayashi | 3 | 31 | 7.03 |
Sho Yokoi | 4 | 0 | 2.03 |
Tatsuki Kuribayashi | 5 | 0 | 3.04 |
Masashi Yoshikawa | 6 | 8 | 3.52 |
Kentaro Inui | 7 | 1008 | 120.35 |