Abstract |
---|
Discriminative training of the feature space using the minimum phone error (fMPE) objective function has been shown to yield remarkable accuracy improvements. These gains, however, come at a high memory cost. In this paper we present techniques that maintain fMPE performance while reducing the required memory by approximately 94%. This is achieved by designing a quantization methodology that minimizes the error between the true fMPE computation and that produced with the quantized parameters. Also illustrated is a Viterbi search over the allocation of quantization levels, providing a framework for optimal non-uniform allocation of quantization levels over the dimensions of the fMPE feature vector. This yields an additional 8% relative reduction in required memory with no loss in recognition accuracy. |
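The abstract's idea of a dynamic-programming search over non-uniform quantization-level allocation can be illustrated with a small sketch. This is a hypothetical toy model, not the paper's implementation: it assumes per-dimension distortion falls off as variance divided by the square of the number of levels (the standard uniform-quantizer approximation), and searches over candidate bit widths under a total bit budget. The function name `allocate_levels` and the error model are assumptions for illustration.

```python
def allocate_levels(variances, budgets, total_budget):
    """DP search over per-dimension quantization bit widths.

    variances:    per-dimension signal variance (hypothetical error model:
                  distortion of an L-level quantizer ~ variance / L**2)
    budgets:      candidate bit widths per dimension, e.g. [2, 4, 8]
    total_budget: total bits allowed summed over all dimensions
    Returns (allocation, distortion) minimizing the summed distortion.
    """
    # dp maps bits-used-so-far -> (best total distortion, allocation list)
    dp = {0: (0.0, [])}
    for var in variances:
        nxt = {}
        for used, (cost, alloc) in dp.items():
            for bits in budgets:
                nb = used + bits
                if nb > total_budget:
                    continue  # prune paths exceeding the memory budget
                levels = 2 ** bits
                c = cost + var / levels ** 2
                if nb not in nxt or c < nxt[nb][0]:
                    nxt[nb] = (c, alloc + [bits])
        dp = nxt
    best_cost, best_alloc = min(dp.values(), key=lambda t: t[0])
    return best_alloc, best_cost

# High-variance dimensions receive wider bit widths under the budget.
alloc, err = allocate_levels([10.0, 1.0, 0.1], budgets=[2, 4, 8],
                             total_budget=14)
```

With these toy inputs the search gives the most bits to the highest-variance dimension, which is the qualitative behavior the non-uniform allocation in the abstract is after.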
Year | Venue | Keywords
---|---|---
2009 | INTERSPEECH 2009: 10th Annual Conference of the International Speech Communication Association | Discriminative training, Quantization, Viterbi
Field | DocType | Citations
---|---|---
Feature vector, Pattern recognition, Computer science, Speech recognition, Artificial intelligence, Discriminative model | Conference | 0
PageRank | References | Authors
---|---|---
0.34 | 1 | 5
Name | Order | Citations | PageRank |
---|---|---|---
Etienne Marcheret | 1 | 100 | 11.15 |
Jiayu Chen | 2 | 78 | 17.64 |
Petr Fousek | 3 | 129 | 12.20 |
Peder A. Olsen | 4 | 398 | 37.80 |
Vaibhava Goel | 5 | 376 | 41.25 |