Abstract |
---|
Latent tree (LT) models are a special class of Bayesian networks that can be used for cluster analysis, latent structure discovery and density estimation. A number of search-based algorithms for learning LT models have been developed. In particular, the HSHC algorithm by [1] and the EAST algorithm by [2] are able to deal with data sets with dozens to around 100 variables. Both HSHC and EAST aim at finding the LT model with the highest BIC score. However, they use another criterion called the cost-effectiveness principle when selecting among some of the candidate models during search. In this paper, we investigate whether and why this is necessary. |
Year | DOI | Venue |
---|---|---
2010 | 10.1007/978-3-642-25655-4_20 | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
Keywords | Field | DocType |
---|---|---
search-based learning,operation granularity,search-based algorithm,bayesian network,hshc algorithm,lt model,cost-effectiveness principle,east algorithm,candidate model,latent structure discovery,latent tree,latent tree model,cluster analysis | Density estimation,Data mining,Data set,Latent class model,Bayesian network,Artificial intelligence,Granularity,Machine learning,Mathematics | Conference |
Volume | Issue | ISSN |
---|---|---
6797 LNAI | null | 1611-3349
Citations | PageRank | References |
---|---|---
0 | 0.34 | 8 |
Authors |
---|
3 |
Name | Order | Citations | PageRank |
---|---|---|---
Tao Chen | 1 | 76 | 7.04 |
Nevin L. Zhang | 2 | 895 | 97.21
Yi Wang | 3 | 64 | 5.86 |