Abstract
---
We describe an efficient procedure for automatic repair of quickly transcribed (QT) speech. QT speech, typically closed-captioned data from television broadcasts, usually has a significant number of deletions and misspellings, and has a characteristic absence of disfluencies such as filled pauses (for example, um, uh). Errors of these kinds often throw an acoustic model training program out of alignment and make it hard for it to resynchronize. At best, the erroneous utterance is discarded and does not benefit the training procedure. At worst, it could misalign and end up sabotaging the training data. The procedure we propose in this paper aims to cleanse such quick transcriptions so that they align better with the acoustic evidence and thus provide for better acoustic models for automatic speech recognition (ASR). Results from comparing our transcripts with those from careful transcriptions on the same corpus, and from comparable state-of-the-art methods, are also presented and discussed.
Year | Venue | DocType
---|---|---
2004 | INTERSPEECH | Conference

Citations | PageRank | References
---|---|---
13 | 1.07 | 2
Authors (6)
Name | Order | Citations | PageRank |
---|---|---|---
Anand Venkataraman | 1 | 218 | 21.26 |
Andreas Stolcke | 2 | 6690 | 712.46 |
Wen Wang | 3 | 327 | 29.31 |
Dimitra Vergyri | 4 | 373 | 36.97 |
Jing Zheng | 5 | 442 | 43.00 |
Venkata Ramana Rao Gadde | 6 | 188 | 15.83 |