Abstract
---
c-rater is Educational Testing Service's technology for the content scoring of short student responses. A major step in the scoring process is Model Building, where variants of model answers are generated that correspond to the rubric for each item, or test question. Until recently, Model Building was knowledge-engineered (KE) and hence labor- and time-intensive. In this paper, we describe our approach to automating Model Building in c-rater. We show that c-rater achieves comparable accuracy on automatically built and KE models.
Year | Venue | Keywords
---|---|---
2009 | TextInfer@ACL | educational testing service, content scoring, automating model building, ke model, comparable accuracy, model answer, test question, short student response, major step, scoring process, model building, knowledge engineering
DocType | Volume | Citations
---|---|---
Conference | W09-25 | 3

PageRank | References | Authors
---|---|---
0.40 | 5 | 2
Name | Order | Citations | PageRank
---|---|---|---
Jana Z. Sukkarieh | 1 | 21 | 3.70
Svetlana Stoyanchev | 2 | 104 | 13.61