Abstract
---
The use of pretrained masked language models (MLMs) has drastically improved the performance of zero anaphora resolution (ZAR). We further expand this approach with a novel pretraining task and finetuning method for Japanese ZAR. Our pretraining task aims to acquire anaphoric relational knowledge necessary for ZAR from a large-scale raw corpus. The ZAR model is then finetuned in the same manner as the pretraining. Our experiments show that combining the proposed methods surpasses the previous state-of-the-art performance by large margins, providing insight into the remaining challenges.
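To make the role of the MLM concrete, below is a minimal sketch of probing a pretrained Japanese MLM for the kind of anaphoric knowledge the abstract refers to: an argument position is masked and the MLM ranks candidate fillers. The model name, example sentence, and masking scheme are illustrative assumptions, not the authors' actual pretraining task.

```python
# A minimal sketch (not the paper's method): probe a pretrained Japanese MLM
# by masking an omitted (zero) argument slot and inspecting its predictions.
# The model checkpoint and example sentence are illustrative assumptions.
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="cl-tohoku/bert-base-japanese-whole-word-masking",  # assumed Japanese MLM
)

# "Taro bought a book. (He) read [MASK]." -- the masked slot stands in for
# the omitted argument; an MLM with anaphoric knowledge should rank the
# antecedent-compatible filler (e.g., "book") highly.
text = "太郎は本を買った。[MASK]を読んだ。"
for candidate in fill_mask(text, top_k=3):
    print(candidate["token_str"], candidate["score"])
```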
Year | Venue | DocType | Volume | Citations | PageRank | References | Authors
---|---|---|---|---|---|---|---
2021 | EMNLP | Conference | 2021.emnlp-main | 0 | 0.34 | 0 | 5
Name | Order | Citations | PageRank
---|---|---|---
Ryuto Konno | 1 | 0 | 0.34 |
Shun Kiyono | 2 | 0 | 3.72 |
Yuichiroh Matsubayashi | 3 | 37 | 7.26 |
Hiroki Ouchi | 4 | 18 | 8.08 |
Kentaro Inui | 5 | 1008 | 120.35 |