Title
Comparison of Decoding Strategies for CTC Acoustic Models
Abstract
Connectionist Temporal Classification (CTC) has recently attracted a lot of interest, as it offers an elegant approach to building acoustic models (AMs) for speech recognition. The CTC loss function maps an input sequence of observable feature vectors to an output sequence of symbols. Output symbols are conditionally independent of each other under CTC loss, so a language model (LM) can be incorporated conveniently during decoding, retaining the traditional separation of acoustic and linguistic components in ASR. For fixed vocabularies, Weighted Finite State Transducers provide a strong baseline for efficient integration of CTC AMs with n-gram LMs. Character-based neural LMs provide a straightforward solution for open-vocabulary speech recognition and all-neural models, and can be decoded with beam search. Finally, sequence-to-sequence models can be used to translate a sequence of individual sounds into a word string. We compare the performance of these three approaches and analyze their error patterns, which provides insightful guidance for future research and development in this important area.
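The mapping from frame-level CTC symbols to an output string described above can be illustrated with a minimal greedy (best-path) decoding sketch: collapse repeated symbols, then remove the blank. The symbol alphabet and blank token here are assumptions for illustration only; the paper itself compares richer decoding strategies (WFST, character-LM beam search, sequence-to-sequence).

```python
# Minimal sketch of CTC greedy (best-path) decoding, assuming "-" is the
# blank symbol and each element of frame_symbols is the per-frame argmax.
def ctc_greedy_decode(frame_symbols, blank="-"):
    out = []
    prev = None
    for s in frame_symbols:
        # Keep a symbol only when it differs from the previous frame
        # (collapse repeats) and is not the CTC blank.
        if s != prev and s != blank:
            out.append(s)
        prev = s
    return "".join(out)

# Frame outputs "hh-e-ll-lo-" collapse to the word "hello".
print(ctc_greedy_decode(list("hh-e-ll-lo-")))  # → hello
```

A full beam-search decoder would instead keep the top-k label prefixes per frame and rescore them with an external LM, which is where the decoding strategies compared in this paper diverge.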
Year: 2017
DOI: 10.21437/Interspeech.2017-1683
Venue: 18TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2017), VOLS 1-6: SITUATED INTERACTION
Keywords: automatic speech recognition, character-based language models, decoding, neural networks
DocType: Conference
Volume: abs/1708.04469
ISSN: 2308-457X
Citations: 3
PageRank: 0.43
References: 14
Authors: 7
Name                 Order  Citations  PageRank
Thomas Zenkel        1      6          1.50
Ramon Sanabria       2      8          4.90
Florian Metze        3      1069       106.49
Jan Niehues          4      259        39.48
Matthias Sperber     5      28         1.20
Sebastian Stüker     6      162        32.58
Alex Waibel          7      10         3.92