Title
Balancing Human Efforts and Performance of Student Response Analyzer in Dialog-Based Tutors
Abstract
Accurately interpreting student responses is a critical requirement of dialog-based intelligent tutoring systems. The accuracy of supervised learning methods used for interpreting or analyzing student responses depends strongly on the availability of annotated training data, and collecting and grading student responses is tedious, time-consuming, and expensive. This work proposes an iterative data collection and grading approach. We show that data collection effort can be significantly reduced by predicting question difficulty and by collecting answers from a focused set of students. Further, grading effort can be reduced by filtering out student answers that are unlikely to help in training the Student Response Analyzer (SRA). To ensure the quality of grades, we analyze grader characteristics and show an improvement when a biased grader is removed. An experimental evaluation on a large-scale dataset shows a reduction of up to 28% in data collection cost and up to 10% in grading cost, while improving the macro-average F1 of response analysis.
Year
2018
Venue
AIED
Field
Dialog box, Training set, Data collection, Response analysis, Grading (education), Computer science, Filter (signal processing), Supervised learning, Artificial intelligence, Spectrum analyzer, Machine learning
DocType
Conference
Citations
1
PageRank
0.35
References
15
Authors
5
Name                 Order  Citations  PageRank
Tejas I. Dhamecha    1      100        8.32
Smit Marvaniya       2      15         3.63
Swarnadeep Saha      3      1          0.35
Renuka Sindhgatta    4      169        21.49
Bikram Sengupta      5      393        37.73