Title
Expert and Crowdsourced Annotation of Pronunciation Errors for Automatic Scoring Systems
Abstract
This paper evaluates and compares different approaches to collecting judgments about the pronunciation accuracy of non-native speech. We compare the common approach, which requires expert linguists to provide a detailed phonetic transcription of non-native English speech, with word-level judgments collected from multiple naive listeners using a crowdsourcing platform. In both cases we found low agreement between annotators on which words should be marked as errors. We also compare the error detection task to a simple transcription task in which the annotators were asked to transcribe the same fragments using standard English spelling. We argue that the transcription task is a simpler and more practical way of collecting annotations, one that also leads to more valid data for training an automatic scoring system.
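The comparison described in the abstract rests on inter-annotator agreement over binary word-level judgments (mispronounced vs. correct). As an illustrative sketch only, and not the authors' code, the snippet below computes Fleiss' kappa, a standard chance-corrected agreement measure for multiple raters; the function name and the example counts are hypothetical.

```python
# Illustrative sketch only (not the authors' code): chance-corrected agreement
# (Fleiss' kappa) over binary word-level error judgments from several listeners.
# The rater counts in the example are hypothetical.

def fleiss_kappa(ratings, n_categories=2):
    """ratings: one row per word, giving how many raters chose each category,
    e.g. [3, 2] = 3 raters marked the word as mispronounced, 2 as correct.
    Every word must have been judged by the same number of raters."""
    n_items = len(ratings)
    n_raters = sum(ratings[0])

    # Proportion of all assignments that fall into each category.
    p_j = [sum(row[j] for row in ratings) / (n_items * n_raters)
           for j in range(n_categories)]

    # Per-word agreement: fraction of rater pairs that agree on that word.
    p_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in ratings]

    p_bar = sum(p_i) / n_items        # observed agreement
    p_e = sum(p * p for p in p_j)     # agreement expected by chance
    return (p_bar - p_e) / (1 - p_e)


if __name__ == "__main__":
    # Hypothetical judgments: [raters saying "error", raters saying "correct"]
    word_judgments = [[5, 0], [1, 4], [2, 3], [0, 5], [3, 2]]
    print(f"Fleiss' kappa: {fleiss_kappa(word_judgments):.3f}")
```

Kappa values near 0 indicate little agreement beyond chance, which is the kind of outcome the abstract characterizes as low agreement between annotators on the error detection task.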
Year
2015
Venue
16th Annual Conference of the International Speech Communication Association (INTERSPEECH 2015), Vols 1-5
Keywords
pronunciation error detection, annotation, crowd-sourcing, educational applications, second language acquisition, tutoring systems
Field
Pronunciation, Annotation, Phonetic transcription, Computer science, Crowdsourcing, Speech recognition, Error detection and correction, Natural language processing, Spelling, Artificial intelligence, Standard English, Scoring system
DocType
Conference
Citations
0
PageRank
0.34
References
11
Authors
5
Name, Order, Citations, PageRank
Anastassia Loukina12410.47
Melissa Lopez2192.64
Keelan Evanini37920.23
David Suendermann-Oeft432.17
Klaus Zechner553466.55