Title
Compacting Discriminative Feature Space Transforms For Embedded Devices
Abstract
Discriminative training of the feature space using the minimum phone error objective function (fMPE) has been shown to yield remarkable accuracy improvements. These gains, however, come at a high cost of memory. In this paper we present techniques that maintain fMPE performance while reducing the required memory by approximately 94%. This is achieved by designing a quantization methodology which minimizes the error between the true fMPE computation and that produced with the quantized parameters. Also illustrated is a Viterbi search over the allocation of quantization levels, providing a framework for optimal non-uniform allocation of quantization levels over the dimensions of the fMPE feature vector. This provides an additional 8% relative reduction in required memory with no loss in recognition accuracy.
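The two ideas in the abstract can be illustrated with a minimal sketch, under assumptions of our own (the paper's actual formulation is not reproduced here, and the function names `quantize_kmeans` and `allocate_levels` are hypothetical): a 1-D Lloyd-Max (k-means) quantizer that minimizes squared error between original and quantized parameter values, and a dynamic-programming (Viterbi-style) search that distributes a fixed budget of quantization levels across feature-vector dimensions so as to minimize total quantization error.

```python
import numpy as np

def quantize_kmeans(values, n_levels, n_iter=20):
    """Lloyd-Max (1-D k-means) quantization: fit a codebook that
    minimizes squared error between values and their quantized form."""
    codebook = np.linspace(values.min(), values.max(), n_levels)
    for _ in range(n_iter):
        # Assign each value to its nearest codeword, then recenter.
        idx = np.argmin(np.abs(values[:, None] - codebook[None, :]), axis=1)
        for k in range(n_levels):
            if np.any(idx == k):
                codebook[k] = values[idx == k].mean()
    idx = np.argmin(np.abs(values[:, None] - codebook[None, :]), axis=1)
    return codebook[idx]

def allocate_levels(error, budget):
    """Viterbi-style DP allocation of a total budget of quantization
    levels across dimensions.

    error[d][l] = quantization error of dimension d when it is given
    l+1 levels. Returns (per-dimension level counts, total error)
    minimizing the sum subject to sum(levels) == budget."""
    D, L = len(error), len(error[0])
    INF = float('inf')
    # best[d][b] = min total error over dims 0..d spending b levels
    best = [[INF] * (budget + 1) for _ in range(D)]
    back = [[0] * (budget + 1) for _ in range(D)]
    for l in range(L):
        if l + 1 <= budget:
            best[0][l + 1] = error[0][l]
            back[0][l + 1] = l + 1
    for d in range(1, D):
        for b in range(budget + 1):
            for l in range(L):
                spent = b - (l + 1)
                if spent >= 0 and best[d - 1][spent] + error[d][l] < best[d][b]:
                    best[d][b] = best[d - 1][spent] + error[d][l]
                    back[d][b] = l + 1
    # Backtrace the winning allocation.
    alloc, b = [0] * D, budget
    for d in range(D - 1, -1, -1):
        alloc[d] = back[d][b]
        b -= alloc[d]
    return alloc, best[D - 1][budget]
```

With a toy error table `[[4, 1, 0], [9, 4, 1]]` and a budget of 4 levels, the DP prefers giving two levels to each dimension (total error 5) over spending three levels on either one, which is the kind of non-uniform trade-off the paper's Viterbi search makes over fMPE dimensions.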
Year
2009
Venue
INTERSPEECH 2009: 10TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION 2009, VOLS 1-5
Keywords
Discriminative training, Quantization, Viterbi
Field
Feature vector, Pattern recognition, Computer science, Speech recognition, Artificial intelligence, Discriminative model
DocType
Conference
Citations
0
PageRank
0.34
References
1
Authors
5

Name | Order | Citations | PageRank
Etienne Marcheret | 1 | 100 | 11.15
Jiayu Chen | 2 | 78 | 17.64
Petr Fousek | 3 | 129 | 12.20
Peder A. Olsen | 4 | 398 | 37.80
Vaibhava Goel | 5 | 376 | 41.25