Title
A One-Pass Real-Time Decoder Using Memory-Efficient State Network
Abstract
This paper presents a decoder that statically optimizes some of the knowledge sources while handling the others dynamically. The lexicon, phonetic contexts, and acoustic model are statically integrated into a memory-efficient state network, while the language model (LM) is incorporated on the fly by means of extended tokens. The novelties of our approach to constructing the state network are (1) introducing two layers of dummy nodes to cluster the cross-word (CW) context-dependent fan-in and fan-out triphones, (2) introducing a so-called "WI layer" that stores the word identities and placing its nodes in the non-shared middle part of the network, and (3) optimizing the network at the state level through a thorough forward and backward node-merging process. The state network is organized as a multi-layer structure so that token propagation can be handled distinctly at each layer. By exploiting the characteristics of the state network, several techniques, including LM look-ahead, an LM cache, and beam pruning, are specially designed for search efficiency. In particular, a layer-dependent pruning method is proposed to further reduce the search space: it takes into account the neck-like shape of the WI layer and the reduced variety of word endings, which permits tighter beams without introducing many search errors. In addition, LM compression, lattice-based bookkeeping, and lattice garbage collection are employed to reduce the memory requirements. Experiments are carried out on a Mandarin spontaneous speech recognition task in which the decoder uses a trigram LM and CW triphone models. A comparison with HDecode from the HTK toolkit shows that, with a performance deviation within 1%, our decoder runs five times faster using half the memory footprint.
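To make the search side of the abstract concrete, the following is a minimal, hypothetical Python sketch of one time-synchronous token-passing frame over such a layered state network, combining an LM look-ahead cache with layer-dependent beam pruning. All names (`State`, `Token`, `LMLookaheadCache`, `LAYER_BEAM`, `expand_frame`) and the beam values are illustrative assumptions, not the paper's implementation; a real decoder would also update LM histories at word ends and keep back-pointers for lattice generation, which are omitted here.

```python
from collections import defaultdict
from dataclasses import dataclass
from typing import Dict, List, Tuple

# Hypothetical per-layer beam widths (log domain). Tighter beams are assumed
# for the WI and fan-out layers, where the variety of hypotheses is reduced.
LAYER_BEAM = {"fan_in": 200.0, "mid": 200.0, "wi": 120.0, "fan_out": 100.0}


@dataclass
class State:
    state_id: int
    layer: str                           # "fan_in" | "mid" | "wi" | "fan_out"
    successors: List[Tuple[int, float]]  # (next state_id, transition log-prob)
    word_id: int = -1                    # meaningful only for WI-layer states


@dataclass
class Token:
    score: float                   # accumulated acoustic + LM log score
    lm_history: Tuple[int, ...]    # previous word ids (trigram history)
    state_id: int


class LMLookaheadCache:
    """Caches the best anticipated LM score for a (history, node) pair so it
    is computed at most once while that history is active."""

    def __init__(self, lm_score_fn):
        self.lm_score_fn = lm_score_fn   # (history, word_id) -> log-prob
        self.cache: Dict[Tuple[Tuple[int, ...], int], float] = {}

    def score(self, history, node_id, reachable_words) -> float:
        key = (history, node_id)
        if key not in self.cache:
            self.cache[key] = max(self.lm_score_fn(history, w)
                                  for w in reachable_words)
        return self.cache[key]


def expand_frame(active, network, acoustic_fn, lookahead, reachable_words_fn):
    """Propagate tokens one frame, recombine tokens sharing a state and LM
    history, then prune each token against the best of its own layer."""
    best: Dict[Tuple[int, Tuple[int, ...]], Token] = {}
    for tok in active:
        for next_id, trans_lp in network[tok.state_id].successors:
            score = tok.score + trans_lp + acoustic_fn(next_id)
            key = (next_id, tok.lm_history)
            cur = best.get(key)
            if cur is None or score > cur.score:
                best[key] = Token(score, tok.lm_history, next_id)

    # Pruning score = accumulated score + LM look-ahead, i.e. the best LM
    # score of any word still reachable from the node under this history.
    def prune_score(tok):
        return tok.score + lookahead.score(tok.lm_history, tok.state_id,
                                           reachable_words_fn(tok.state_id))

    # Layer-dependent beam pruning: each layer has its own beam width, and a
    # token is compared only against the best token of the same layer.
    scored = [(tok, prune_score(tok)) for tok in best.values()]
    best_in_layer = defaultdict(lambda: float("-inf"))
    for tok, s in scored:
        layer = network[tok.state_id].layer
        best_in_layer[layer] = max(best_in_layer[layer], s)

    survivors = []
    for tok, s in scored:
        layer = network[tok.state_id].layer
        if s >= best_in_layer[layer] - LAYER_BEAM[layer]:
            survivors.append(tok)
    return survivors
```

The point of the sketch is the last two loops: because the best score is tracked per layer, a narrower beam can be applied at the WI and word-end layers than in the shared mid-part of the network, which is the layer-dependent pruning idea described in the abstract.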
Year
2008
DOI
10.1093/ietisy/e91-d.3.529
Venue
IEICE Transactions
Keywords
WI layer, state network, one-pass real-time decoder, memory-efficient state network, trigram LM, LM look-ahead, state level, LM cache, beam pruning, LM compression, layer-dependent pruning, real time, search space, language model, look-ahead, garbage collection, context dependent
Field
Triphone, Computer science, CPU cache, Cache, Trigram, Algorithm, Garbage collection, Memory footprint, Language model, Acoustic model
DocType
Journal
Volume
E91-D
Issue
3
ISSN
1745-1361
Citations
25
PageRank
1.51
References
10
Authors
5
1. Jian Shao
2. Ta Li
3. Qingqing Zhang
4. Qingwei Zhao
5. Yonghong Yan