Title
Human annotation of ASR error regions: Is "gravity" a sharable concept for human annotators?
Abstract
This paper is concerned with human assessments of the severity of errors in ASR outputs. We deliberately designed no annotation guidelines, so that each annotator involved in the study could judge the "seriousness" of an ASR error according to their own scientific background. Eight human annotators carried out an annotation task on three distinct corpora, one of which was annotated twice, with the duplicate hidden from the annotators. None of the computed measures (inter-annotator agreement, edit distance, majority annotation) reveals any strong correlation between the criteria considered and the level of seriousness, which underlines how difficult it is for a human to determine whether an ASR error is serious or not.
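The abstract names three measures: inter-annotator agreement, edit distance, and a majority annotation. The paper's own code is not part of this record; the following minimal Python sketch only illustrates how such measures are commonly computed (Cohen's kappa for pairwise agreement, Levenshtein distance over token sequences, and a simple majority vote). All function names and label sequences below are hypothetical, not the authors' implementation.

from collections import Counter

def cohen_kappa(a, b):
    # Pairwise inter-annotator agreement between two label sequences.
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[lab] * cb[lab] for lab in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)

def edit_distance(ref, hyp):
    # Levenshtein distance between two token sequences (dynamic programming).
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != h)))  # substitution
        prev = cur
    return prev[-1]

# Hypothetical severity labels (0 = benign, 1 = serious) assigned to the
# same sequence of ASR error regions by two of the eight annotators.
ann1 = [0, 1, 1, 0, 1, 0, 0, 1]
ann2 = [0, 1, 0, 0, 1, 1, 0, 1]
print("Cohen's kappa:", round(cohen_kappa(ann1, ann2), 3))  # 0.5

# Edit distance between a reference transcript and an ASR hypothesis.
print("edit distance:", edit_distance("the cat sat".split(),
                                      "a cat sat down".split()))  # 2

# Majority annotation for one region across all eight annotators.
votes = [0, 1, 1, 0, 1, 1, 0, 1]
print("majority label:", Counter(votes).most_common(1)[0][0])  # 1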
Year
2014
Venue
LREC 2014 - Ninth International Conference on Language Resources and Evaluation
Keywords
Annotation, ASR Seriousness Errors, Speech Recognition
Field
Edit distance, Annotation, Computer science, Speech recognition, Artificial intelligence, Natural language processing, Seriousness
DocType
Conference
Citations
0
PageRank
0.34
References
9
Authors
10
Name                 Order  Citations  PageRank
Daniel Luzzati       1      26         4.02
Cyril Grouin         2      170        30.22
Ioana Vasilescu      3      64         16.01
Martine Adda-Decker  4      360        67.37
Eric Bilinski        5      57         9.39
Nathalie Camelin     6      39         14.29
Juliette Kahn        7      109        12.94
Carole Lailler       8      1          1.37
L. Lamel             9      2135       361.63
Sophie Rosset        10     393        61.66