Abstract |
---|
In this paper we present an extension to the classical k-dependence Bayesian network classifier algorithm. The original method is designed to cover the whole continuum of Bayesian classifiers, from naïve Bayes to unrestricted networks. In our experience it performs well for low values of k, but it tends to degrade in more complex spaces, as it greedily tries to add k dependencies to every feature node of the resulting network. We try to overcome this limitation by seeking optimal values of k on a per-feature basis while simultaneously searching for the best feature ordering; that is, we estimate the joint probability distribution of optimal feature orderings and individual numbers of dependencies. We believe this preserves the essence of the original algorithm while providing notable performance improvements. |
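The classical KDB algorithm the abstract extends orders features by mutual information with the class and then greedily gives each feature up to k parents among earlier features. The following is a minimal sketch of that baseline, assuming discrete features; the function names are illustrative, and the paper's extension (a per-feature k_i and an ordering searched via an estimation of distribution algorithm, per the keywords) is only noted in a comment, not implemented here.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information I(X; Y) between two discrete sequences."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * math.log(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

def kdb_structure(data, labels, k):
    """Classical KDB structure step: rank features by I(X_i; C), then give
    each feature up to k parents chosen from earlier features by I(X_i; X_j).
    The paper's extension would replace the fixed k with a per-feature k_i
    and search over orderings instead of fixing the MI-based one."""
    n_feats = len(data[0])
    cols = [[row[i] for row in data] for i in range(n_feats)]
    # Feature ordering: descending mutual information with the class.
    order = sorted(range(n_feats),
                   key=lambda i: -mutual_information(cols[i], labels))
    parents = {}
    for pos, i in enumerate(order):
        # Candidate parents are the features placed earlier in the ordering.
        cands = sorted(order[:pos],
                       key=lambda j: -mutual_information(cols[i], cols[j]))
        parents[i] = cands[:k]  # per-feature k_i would be selected here
    return order, parents

# Tiny illustrative run: feature 0 copies the class, so it ranks first.
labels = [0, 0, 1, 1, 0, 1]
data = [(0, 0, 1), (0, 1, 0), (1, 0, 1), (1, 1, 0), (0, 0, 0), (1, 1, 1)]
order, parents = kdb_structure(data, labels, 2)
```

With k=2 every feature receives at most two parents; the first feature in the ordering necessarily has none, which is why naïve Bayes is recovered at k=0.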
Year | DOI | Venue
---|---|---
2011 | 10.1145/2001576.2001741 | GECCO

Keywords | Field | DocType
---|---|---
classifier algorithm, best feature, optimal value, bayesian classifier, k dependency, classical k-dependence bayesian network, optimal feature ordering, k-dependence bayesian network classifier, feature node, feature basis, flexible learning, original algorithm, probability distribution, estimation of distribution algorithm | Variable-order Bayesian network, Joint probability distribution, Naive Bayes classifier, Estimation of distribution algorithm, Computer science, Bayesian network classifier, Bayesian network, Artificial intelligence, Bayesian statistics, Machine learning, Bayesian probability | Conference

Citations | PageRank | References
---|---|---
0 | 0.34 | 14
Authors |
---|
2 |

Name | Order | Citations | PageRank
---|---|---|---
Arcadio Rubio | 1 | 0 | 0.34
José Antonio Gámez | 2 | 16 | 2.49