Abstract
---
In recent years, pre-trained models have been studied extensively, and many downstream tasks have benefited from them. In this study, we verify the effectiveness of two methods that incorporate the BERT-based pre-trained model developed by Cui et al. (2020) into an encoder-decoder model on Chinese grammatical error correction tasks. We also analyze the error types and conclude that sentence-level errors remain to be addressed.
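The abstract does not spell out the two incorporation methods, so the following is only a minimal sketch of one common way to build an encoder-decoder model from a BERT checkpoint: warm-starting both the encoder and the decoder from pre-trained weights (a BERT2BERT-style setup) via the Hugging Face `transformers` library. The checkpoint name `hfl/chinese-bert-wwm-ext` (a release associated with Cui et al.'s Chinese whole-word-masking BERT), the warm-start scheme, and the example sentence are all illustrative assumptions, not the paper's confirmed configuration.

```python
# Sketch: warm-start an encoder-decoder model from a Chinese BERT checkpoint.
# Assumes the Hugging Face `transformers` library; the checkpoint name is an
# illustrative assumption, not necessarily the one used in the paper.
from transformers import BertTokenizerFast, EncoderDecoderModel

model_name = "hfl/chinese-bert-wwm-ext"

tokenizer = BertTokenizerFast.from_pretrained(model_name)

# Initialize both encoder and decoder from the pre-trained checkpoint.
# The decoder's cross-attention layers do not exist in BERT, so they are
# added and randomly initialized; the model must be fine-tuned on GEC
# parallel data (erroneous -> corrected sentences) before it is useful.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(model_name, model_name)

# Generation settings required for a BERT2BERT-style seq2seq model.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.eos_token_id = tokenizer.sep_token_id
model.config.pad_token_id = tokenizer.pad_token_id

# Hypothetical erroneous input sentence (duplicated particle).
src = "他昨天去学校了了。"
inputs = tokenizer(src, return_tensors="pt")
out_ids = model.generate(
    input_ids=inputs.input_ids,
    attention_mask=inputs.attention_mask,
    max_length=32,
)
print(tokenizer.decode(out_ids[0], skip_special_tokens=True))
```

An alternative family of methods keeps BERT frozen and feeds its contextual representations into the encoder-decoder model as additional features rather than as initial weights; which variants the paper actually evaluates is not stated in this abstract.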
Year | Venue | DocType
---|---|---|
2020 | AACL/IJCNLP | Conference

Citations | PageRank | References
---|---|---|
0 | 0.34 | 0
Authors
---
4

Name | Order | Citations | PageRank
---|---|---|---|
Hongfei Wang | 1 | 7 | 2.34 |
Michiki Kurosawa | 2 | 0 | 0.68 |
Satoru Katsumata | 3 | 0 | 3.38 |
Mamoru Komachi | 4 | 241 | 44.56 |