Title
Incorporating Uncertainty Into Deep Learning For Spoken Language Assessment
Abstract
There is a growing demand for automatic assessment of spoken English proficiency. These systems need to handle large variations in input data, owing to the wide range of candidate skill levels and first languages (L1s), as well as errors from automatic speech recognition (ASR). Some candidates will be a poor match to the training data set, undermining the validity of the predicted grade. For high-stakes tests it is essential for such systems not only to grade well, but also to provide a measure of uncertainty in their predictions, enabling rejection to human graders. Previous work examined Gaussian Process (GP) graders which, though successful, do not scale well to large data sets. Deep Neural Networks (DNNs) can also provide uncertainty estimates, for example via Monte-Carlo Dropout (MCD). This paper proposes a novel method to yield uncertainty estimates and compares it to GPs and DNNs with MCD. The proposed approach explicitly teaches a DNN to have low uncertainty on training data and high uncertainty on generated artificial data. In experiments conducted on data from the Business Language Testing Service (BULATS), the proposed approach is found to outperform GPs and DNNs with MCD in uncertainty-based rejection whilst achieving comparable grading performance.
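The Monte-Carlo Dropout (MCD) baseline mentioned in the abstract can be illustrated with a minimal sketch (not the authors' code), assuming PyTorch and a hypothetical trained grade-regression model `grader` that contains dropout layers: dropout is left active at test time, and the variance of repeated stochastic forward passes serves as the uncertainty measure used for rejection to human graders.

    import torch

    def mc_dropout_predict(grader: torch.nn.Module,
                           features: torch.Tensor,
                           n_samples: int = 50):
        """Return the mean predicted grade and its predictive variance."""
        grader.train()  # keep dropout active: each pass samples a sub-network
        with torch.no_grad():
            samples = torch.stack([grader(features) for _ in range(n_samples)])
        # The spread of the sampled predictions is the MCD uncertainty estimate;
        # candidates whose variance exceeds a threshold tuned on held-out data
        # would be rejected to human graders rather than auto-graded.
        return samples.mean(dim=0), samples.var(dim=0)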
Year
2017
DOI
10.18653/v1/P17-2008
Venue
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017), Vol 2
Field
Data set, Grading (education), Computer science, Computational linguistics, Global Positioning System, Gaussian process, Natural language processing, Artificial intelligence, Language assessment, Deep learning, Spoken language, Machine learning
DocType
Conference
Volume
P17-2
Citations
3
PageRank
0.40
References
5
Authors
4
Name              Order  Citations  PageRank
Andrey Malinin    1      39         7.54
Anton Ragni       2      98         9.06
Kate Knill        3      249        28.02
Mark J. F. Gales  4      3905       367.45