Title
Evaluating Human and Automated Generation of Distractors for Diagnostic Multiple-Choice Cloze Questions to Assess Children's Reading Comprehension
Abstract
We report an experiment to evaluate DQGen's performance in generating three types of distractors for diagnostic multiple-choice cloze (fill-in-the-blank) questions to assess children's reading comprehension processes. Ungrammatical distractors test syntax, nonsensical distractors test semantics, and locally plausible distractors test inter-sentential processing. 27 knowledgeable humans rated candidate answers as correct, plausible, nonsensical, or ungrammatical, without knowing their intended type or whether they were generated by DQGen, written by other humans, or were in fact the correct answers. Surprisingly, DQGen did significantly better than humans at generating ungrammatical distractors and slightly better at generating nonsensical distractors, albeit worse at generating plausible distractors. Vetting its output and writing distractors only when necessary would take half as long as writing them all, and would improve their quality.
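To illustrate the distractor taxonomy, here is a minimal, purely hypothetical Python sketch. The sample sentence, the word lists, and the generate_* helpers are invented stand-ins for exposition only; they are not DQGen's actual corpus-based generation method.

import random

# Hypothetical cloze item; the blank replaces the target word "mouse".
SENTENCE = "The hungry cat chased the ___ across the yard."
CORRECT = "mouse"

def generate_ungrammatical() -> str:
    """Violate syntax: offer a word of the wrong part of speech for the blank."""
    wrong_pos_words = ["quickly", "ran", "beautiful"]  # adverb / verb / adjective
    return random.choice(wrong_pos_words)

def generate_nonsensical() -> str:
    """Fit the syntax (a noun) but violate the sentence's semantics."""
    implausible_nouns = ["ocean", "democracy", "thunderstorm"]
    return random.choice(implausible_nouns)

def generate_plausible() -> str:
    """Fit syntax and local semantics; only the wider passage context
    (inter-sentential processing) would rule these out."""
    locally_plausible_nouns = ["ball", "squirrel", "butterfly"]
    return random.choice(locally_plausible_nouns)

if __name__ == "__main__":
    options = [
        ("correct", CORRECT),
        ("ungrammatical", generate_ungrammatical()),
        ("nonsensical", generate_nonsensical()),
        ("plausible", generate_plausible()),
    ]
    # As in the experiment, present candidates without revealing their type.
    random.shuffle(options)
    for kind, word in options:
        print(SENTENCE.replace("___", word), f"  [{kind}]")

In the study itself, raters saw only the candidate answers in the cloze context; the bracketed type labels above stand in for the hidden intended types against which their ratings were compared.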
Year: 2015
DOI: 10.1007/978-3-319-19773-9_16
Venue: ARTIFICIAL INTELLIGENCE IN EDUCATION, AIED 2015
Keywords: Question generation, Reading comprehension, Cloze, Distractors
Field: Vetting, Reading comprehension, Psychology, Cognitive psychology, Question generation, Syntax, Semantics, Multiple choice
DocType: Conference
Volume: 9112
ISSN: 0302-9743
Citations: 2
PageRank: 0.39
References: 7
Authors: 2
Name            Order   Citations   PageRank
Yi-Ting Huang   1       4           0.78
Jack Mostow     2       1133        263.51