Abstract |
---|
We propose high-order hidden Markov models (HO-HMMs) to capture the duration and dynamics of the speech signal. In the proposed model, both the state transition probability and the output observation probability depend not only on the current state but also on several previous states. An extended Viterbi algorithm was developed to train the model and recognize speech. The performance of the HO-HMM was investigated in experiments on speaker-independent Mandarin digit recognition. The experimental results show that as the order of the HO-HMM increases, the number of recognition errors decreases. They also show that systems with both a high-order state transition probability distribution and a high-order output observation probability distribution outperform systems with only a high-order state transition probability distribution. |
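The extended Viterbi recursion described in the abstract can be sketched for the second-order case: since both the transition and the emission condition on one extra previous state, the dynamic program runs over state *pairs* instead of single states. The sketch below is illustrative only; the toy parameters (`N`, `M`, the random probability tables) are invented and are not from the paper.

```python
import numpy as np

# Toy second-order HO-HMM (all parameters invented for illustration).
# Both the transition P(s_t | s_{t-2}, s_{t-1}) and the emission
# P(o_t | s_{t-1}, s_t) condition on one extra previous state.
N, M = 3, 4                        # number of states / observation symbols
rng = np.random.default_rng(0)
norm = lambda a: a / a.sum(axis=-1, keepdims=True)

pi = norm(rng.random(N))           # P(s_1)
A1 = norm(rng.random((N, N)))      # P(s_2 | s_1)          (first-order boundary case)
A2 = norm(rng.random((N, N, N)))   # P(s_t | s_{t-2}, s_{t-1})
B1 = norm(rng.random((N, M)))      # P(o_1 | s_1)          (first-order boundary case)
B2 = norm(rng.random((N, N, M)))   # P(o_t | s_{t-1}, s_t)

def viterbi2(obs):
    """Extended Viterbi: dynamic programming over state pairs (s_{t-1}, s_t)."""
    # d[i, j]: best log-probability of a path ending with the pair (i, j).
    d = (np.log(pi) + np.log(B1[:, obs[0]]))[:, None] \
        + np.log(A1) + np.log(B2[:, :, obs[1]])
    back = []
    for t in range(2, len(obs)):
        # scores[i, j, k]: extend the best path ending in (i, j) with s_t = k.
        scores = d[:, :, None] + np.log(A2) + np.log(B2[:, :, obs[t]])[None]
        back.append(scores.argmax(axis=0))        # best i for each new pair (j, k)
        d = scores.max(axis=0)
    j, k = np.unravel_index(d.argmax(), d.shape)  # best final pair
    path = [j, k]
    for bp in reversed(back):                     # backtrack to recover earlier states
        i = int(bp[j, k])
        path.insert(0, i)
        j, k = i, j
    return [int(s) for s in path]

print(viterbi2([0, 1, 2, 3, 0]))   # most likely 5-state sequence
```

Expanding the trellis to pairs costs a factor of N in states and N in transitions per frame, which is the usual price of raising the model order by one.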
Year | DOI | Venue |
---|---|---|
2006 | 10.1007/11779568_74 | IEA/AIE |
Keywords | Field | DocType
---|---|---|
markov model,ho-hmm increase,output observation probability distribution,speech recognition,speech signal,previous state,state transition probability,output observation probability,high-order hidden markov model,high-order state transition probability,current state,hidden markov model,state transition,probability distribution,viterbi algorithm | Forward algorithm,Markov property,Computer science,Markov model,Speech recognition,Probability distribution,Speaker recognition,Hidden Markov model,Viterbi algorithm,Hidden semi-Markov model | Conference
Volume | ISSN | ISBN
---|---|---|
4031 | 0302-9743 | 3-540-35453-0
Citations | PageRank | References
---|---|---|
10 | 0.90 | 6
Authors |
---|
2 |
Name | Order | Citations | PageRank |
---|---|---|---|
Lee-Min Lee | 1 | 46 | 8.10 |
Jia-Chien Lee | 2 | 10 | 0.90 |