Abstract
---
We show that, given data from a mixture of k well-separated spherical Gaussians in ℝ^d, a simple two-round variant of EM will, with high probability, learn the parameters of the Gaussians to near-optimal precision, if the dimension is high (d ≫ ln k). We relate this to previous theoretical and empirical work on the EM algorithm.
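As an illustration of the setting, here is a minimal sketch of EM for a mixture of k spherical Gaussians, run for a fixed small number of rounds. This is not the paper's exact two-round variant (which additionally over-seeds with extra centers and prunes them); the farthest-first initialization and all function and parameter names are assumptions made for this sketch.

```python
import numpy as np

def em_spherical(X, k, n_iter=2, rng=None):
    """EM for a mixture of k spherical Gaussians (illustrative sketch).

    Component j has mean mu_j, covariance sigma2_j * I, and weight w_j.
    This is plain EM run for `n_iter` rounds, not the paper's exact
    two-round variant with over-seeding and pruning.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    # Assumed initialization: farthest-first traversal -- pick a random
    # point, then repeatedly take the point farthest from the means so far.
    mu = [X[rng.integers(n)]]
    for _ in range(k - 1):
        dists = np.min([((X - m) ** 2).sum(axis=1) for m in mu], axis=0)
        mu.append(X[np.argmax(dists)])
    mu = np.asarray(mu)
    sigma2 = np.full(k, X.var())      # shared spherical variance to start
    w = np.full(k, 1.0 / k)           # uniform mixing weights to start
    for _ in range(n_iter):
        # E-step: responsibilities from spherical-Gaussian log-densities.
        sq = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)  # (n, k)
        logp = (np.log(w) - 0.5 * d * np.log(2 * np.pi * sigma2)
                - sq / (2 * sigma2))
        logp -= logp.max(axis=1, keepdims=True)   # stabilize the exp
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, per-component variances.
        nj = r.sum(axis=0)
        w = nj / n
        mu = (r.T @ X) / nj[:, None]
        sq = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
        sigma2 = (r * sq).sum(axis=0) / (d * nj)
    return w, mu, sigma2
```

On well-separated data in high dimension, the responsibilities are nearly hard assignments after the first round, so a couple of rounds already place the estimated means close to the true centers, in line with the paper's thesis.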
Year | Venue | Keywords
---|---|---
2007 | Journal of Machine Learning Research | probabilistic analysis, mixtures of Gaussians, spherical Gaussians, high probability, empirical work, simple two-round variant, ln k, expectation maximization, unsupervised learning, EM algorithm, clustering, mixture of Gaussians

Field | DocType | Volume
---|---|---
Pattern recognition, Expectation–maximization algorithm, Probabilistic analysis of algorithms, Artificial intelligence, Mathematics, Machine learning | Journal | 8

ISSN | Citations | PageRank
---|---|---
1532-4435 | 48 | 2.67

References | Authors
---|---
9 | 2
Name | Order | Citations | PageRank |
---|---|---|---
Sanjoy Dasgupta | 1 | 2052 | 172.00 |
Leonard J. Schulman | 2 | 1328 | 136.88 |