Abstract |
---|
Cloze tests are widely adopted in language exams to evaluate students' language proficiency. In this paper, we propose CLOTH, the first large-scale human-designed cloze test dataset, whose questions were used in middle-school and high-school language exams. With the missing blanks carefully created by teachers and candidate choices purposely designed to be confusing, CLOTH requires deeper language understanding and a wider attention span than previous automatically generated cloze datasets. We show that humans outperform dedicatedly designed baseline models by a significant margin, even when the models are trained on sufficiently large external data. We investigate the source of the performance gap, trace model deficiencies to some distinct properties of CLOTH, and identify the limited ability to comprehend long-range context as the key bottleneck. In addition, we find that human-designed data leads to a larger gap between model performance and human performance than automatically generated data does. |
Year | Venue | Field
---|---|---
2017 | arXiv: Computation and Language | Bottleneck, Attention span, Language proficiency, Computer science, TRACE (psycholinguistics), Natural language processing, Artificial intelligence, Cloze test, Language understanding, Performance gap

DocType | Volume | Citations
---|---|---
Journal | abs/1711.03225 | 2

PageRank | References | Authors
---|---|---
0.36 | 12 | 4
Name | Order | Citations | PageRank |
---|---|---|---
Qizhe Xie | 1 | 2 | 1.04 |
Guokun Lai | 2 | 244 | 8.90 |
Zihang Dai | 3 | 171 | 12.81 |
Eduard H. Hovy | 4 | 7450 | 663.27 |