Abstract
---
For resource-limited platforms, the Subspace Distribution Clustering Hidden Markov Model (SDCHMM) is preferable to the Continuous Density Hidden Markov Model (CDHMM) because of its smaller storage footprint and lower computational cost at a comparable recognition performance. However, the usual way of obtaining an SDCHMM, conversion from a trained CDHMM, does not guarantee an optimal classifier. To obtain an optimal classifier, this paper proposes a new SDCHMM training algorithm that adjusts the SDCHMM parameters according to the Minimum Classification Error (MCE) criterion. Experimental results on the TiDigits and RM tasks show that the MCE-based SDCHMM training algorithm achieves a 15-80% Word Error Rate Reduction (WERR) over the conventional SDCHMM converted from a CDHMM.
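For context, the MCE criterion the abstract refers to is conventionally formulated as below. This is a minimal sketch of the standard Juang-Katagiri MCE/GPD formulation, not the paper's own derivation; the smoothing constants γ, θ, η and the step size ε are generic assumptions of that formulation.

```latex
% Sketch of the standard MCE criterion (Juang & Katagiri style).
% For an observation X with true class i, g_j(X;\Lambda) denotes the
% log-likelihood discriminant of class j under model parameters \Lambda,
% and M is the number of classes.

% Misclassification measure: positive when competing classes outscore
% the true class (\eta controls how sharply competitors are weighted).
d_i(X) = -g_i(X;\Lambda)
  + \frac{1}{\eta}\log\!\left[\frac{1}{M-1}\sum_{j \ne i} e^{\eta\, g_j(X;\Lambda)}\right]

% Smoothed 0-1 loss via a sigmoid (\gamma, \theta are smoothing constants),
% making the classification error differentiable in \Lambda.
\ell_i(X;\Lambda) = \frac{1}{1 + e^{-\gamma\, d_i(X) + \theta}}

% Generalized probabilistic descent (GPD) update of the model parameters,
% here applied to the SDCHMM parameters with step size \varepsilon_t.
\Lambda_{t+1} = \Lambda_t - \varepsilon_t\, \nabla_{\Lambda}\, \ell_i(X_t;\Lambda_t)
```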
| Year | DOI | Venue |
|---|---|---|
| 2004 | 10.1109/CHINSL.2004.1409599 | 2004 International Symposium on Chinese Spoken Language Processing, Proceedings |
| Keywords | Field | DocType |
|---|---|---|
| statistical distributions, hidden markov models, word error rate, speech recognition, hidden markov model | Pattern recognition, Subspace topology, Markov model, Computer science, Word error rate, Speech recognition, Probability distribution, Artificial intelligence, Classifier (linguistics), Cluster analysis, Hidden Markov model, Computation | Conference |
| Citations | PageRank | References |
|---|---|---|
| 0 | 0.34 | 4 |
Authors (3)
---
| Name | Order | Citations | PageRank |
|---|---|---|---|
| Xiao-Bing Li | 1 | 4 | 1.13 |
| Li-Rong Dai | 2 | 1070 | 117.92 |
| Ren-Hua Wang | 3 | 344 | 41.36 |