Abstract
---
In this paper, in order to properly evaluate the relative importance of priors and observed data in the Bayesian framework, we propose an extended Gaussian mixture model (EGMM) and design the corresponding learning and inference algorithms. First, we define the likelihood function of the EGMM and then propose a variational learning algorithm for it. We then apply the proposed model and approach to speaker recognition. Experimental results demonstrate that this new approach generalizes the traditional GMM and offers stronger performance.
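The paper's EGMM and its variational learning algorithm are not reproduced in this record. As a point of reference for the model it generalizes, the following is a minimal sketch of the traditional GMM fitted by maximum-likelihood EM in one dimension (the EGMM, by contrast, places priors on the mixture parameters and infers them variationally); all function and variable names here are illustrative, not from the paper.

```python
import numpy as np

def fit_gmm_em(x, k=2, n_iter=100):
    """Fit a 1-D Gaussian mixture by maximum-likelihood EM.

    x : 1-D data array; k : number of components.
    Returns (weights, means, variances).
    """
    n = len(x)
    # Deterministic init: spread initial means over data quantiles.
    means = np.quantile(x, (np.arange(k) + 0.5) / k)
    vars_ = np.full(k, np.var(x))
    weights = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities r[n, k] of each component for each point.
        dens = (weights / np.sqrt(2 * np.pi * vars_)
                * np.exp(-(x[:, None] - means) ** 2 / (2 * vars_)))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances from responsibilities.
        nk = r.sum(axis=0)
        weights = nk / n
        means = (r * x[:, None]).sum(axis=0) / nk
        vars_ = (r * (x[:, None] - means) ** 2).sum(axis=0) / nk
    return weights, means, vars_

# Toy data: two well-separated clusters.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-5, 1, 500), rng.normal(5, 1, 500)])
w, m, v = fit_gmm_em(x, k=2)
```

In the Bayesian extension the paper proposes, point estimates like `w`, `m`, `v` are replaced by posterior distributions whose balance against the priors is exactly what the EGMM is designed to evaluate.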
Year | DOI | Venue
---|---|---
2014 | 10.1109/ICCChina.2014.7008278 | IEEE/CIC International Conference on Communications in China (ICCC)
Keywords | Field | DocType
---|---|---
gaussian processes, inference mechanisms, learning (artificial intelligence), maximum likelihood estimation, mixture models, speaker recognition, bayesian framework, egmm, extended gaussian mixture model, inference algorithm, variational learning algorithm, algorithm design and analysis, data models, accuracy, speech | Data modeling, Likelihood function, Algorithm design, Inference, Computer science, Algorithm, Speaker recognition, Prior probability, Mixture model, Bayesian probability | Conference
Citations | PageRank | References
---|---|---
0 | 0.34 | 4
Authors
---
5
Name | Order | Citations | PageRank |
---|---|---|---
Xin Wei | 1 | 26 | 11.66 |
Jianxin Chen | 2 | 77 | 18.83 |
Lei Wang | 3 | 12 | 5.19 |
Jingwu Cui | 4 | 112 | 17.70 |
Baoyu Zheng | 5 | 1008 | 82.73 |