Title
Considering Misconceptions in Automatic Essay Scoring with A-TEST - Amrita Test Evaluation and Scoring Tool.
Abstract
In large classrooms with limited teacher time, there is a need for automatic evaluation of text answers and real-time personalized feedback during the learning process. In this paper, we discuss the Amrita Test Evaluation & Scoring Tool (A-TEST), a text evaluation and scoring tool that learns from course materials, from human-rater-scored text answers, and directly from teacher input. We use latent semantic analysis (LSA) to identify key concepts. While most AES systems use LSA to compare student responses with a set of ideal essays, this approach ignores the common misconceptions that students may have about a topic. A-TEST therefore also uses LSA to learn misconceptions from the lowest-scoring essays and uses them as a factor in scoring. A-TEST was evaluated using two datasets of 1400 and 1800 text answers that had been manually scored by two teachers. The scoring accuracy and kappa scores between the derived A-TEST model and the human raters were comparable to those between the human raters themselves.
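The scoring idea described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: essays are projected into a latent semantic space via SVD, and a response is scored by its similarity to high-scoring essays minus its similarity to low-scoring ("misconception") essays. The toy term-document counts below are hypothetical.

```python
import numpy as np

def lsa_doc_vectors(term_doc, k):
    """Project each document (column) into a k-dimensional latent space."""
    U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
    # Document coordinates are the rows of (S_k @ Vt_k).T; U_k folds in new docs.
    return (np.diag(s[:k]) @ Vt[:k]).T, U[:, :k]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def score(doc_vec, good_centroid, bad_centroid):
    """Reward similarity to model answers, penalize similarity to misconceptions."""
    return cosine(doc_vec, good_centroid) - cosine(doc_vec, bad_centroid)

# Toy counts: rows are terms, columns are essays.
# Essays 0-1 were rated high, essays 2-3 were rated low.
term_doc = np.array([
    [3.0, 2.0, 0.0, 0.0],  # term tied to the correct concept
    [2.0, 3.0, 1.0, 0.0],  # related correct term
    [0.0, 0.0, 3.0, 2.0],  # term tied to a misconception
    [0.0, 1.0, 2.0, 3.0],  # another misconception term
])

docs, U_k = lsa_doc_vectors(term_doc, k=2)
good_centroid = docs[:2].mean(axis=0)
bad_centroid = docs[2:].mean(axis=0)

# Fold new responses into the latent space via U_k.T @ term_counts.
good_response = U_k.T @ np.array([3.0, 2.0, 1.0, 0.0])
bad_response = U_k.T @ np.array([0.0, 1.0, 3.0, 2.0])

print(score(good_response, good_centroid, bad_centroid))
print(score(bad_response, good_centroid, bad_centroid))
```

A response dominated by misconception terms lands near the low-scoring cluster in the latent space and receives a lower score than one built from the correct-concept terms.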
Year: 2013
DOI: 10.1007/978-3-319-08368-1_31
Venue: Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering
Keywords: Feature extraction, Essay scoring, Text analysis, Text mining, Latent semantic analysis (LSA), SVD, Natural language processing (NLP), AES
Field: Test evaluation, Kappa, Amrita, Computer science, Feature extraction, Natural language processing, Artificial intelligence, Latent semantic analysis, Course materials
DocType: Conference
Volume: 135
ISSN: 1867-8211
Citations: 0
PageRank: 0.34
References: 7
Authors: 3
Name             Order  Citations  PageRank
Prema Nedungadi  1      33         13.10
Jyothi L         2      0          0.34
Raghu Raman      3      29         9.66