| Abstract |
| --- |
| We investigate why discretization can be effective in naive-Bayes learning. We prove a theorem that identifies particular conditions under which discretization will result in naive-Bayes classifiers delivering the same probability estimates as would be obtained if the correct probability density functions were employed. We discuss the factors that might affect naive-Bayes classification error under discretization. We suggest that the use of different discretization techniques can affect the classification bias and variance of the generated classifiers. We argue that by properly managing discretization bias and variance, we can effectively reduce naive-Bayes classification error. |
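The abstract's central claim is that, under the right conditions, a naive-Bayes classifier trained on discretized features can deliver the same decisions as one using the correct probability density functions. A minimal sketch of that idea (not the authors' construction; the toy data, bin width, and Gaussian parameters below are illustrative assumptions):

```python
# Sketch: compare naive-Bayes decisions made from equal-width bin
# frequencies against decisions made from the "correct" class-conditional
# Gaussian densities, on a tiny hand-made one-feature dataset.
import math
from collections import Counter

# Hypothetical training data: (value, class) pairs, class 'a' near 0
# and class 'b' near 10.
data = [(0.5, 'a'), (1.0, 'a'), (1.5, 'a'), (2.0, 'a'),
        (8.0, 'b'), (9.0, 'b'), (9.5, 'b'), (10.0, 'b')]

def bin_index(x, width=2.5, lo=0.0):
    """Equal-width discretization: map x to a bin index."""
    return int((x - lo) // width)

# Estimate P(bin | class) by counting, with Laplace smoothing over 5 bins.
counts = Counter((bin_index(x), c) for x, c in data)
class_counts = Counter(c for _, c in data)

def p_bin_given_class(b, c, n_bins=5):
    return (counts[(b, c)] + 1) / (class_counts[c] + n_bins)

def classify_discretized(x):
    # Uniform class priors, so compare likelihoods directly.
    return max(class_counts, key=lambda c: p_bin_given_class(bin_index(x), c))

def gaussian_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def classify_true_density(x):
    # Assumed "correct" densities for this toy data.
    params = {'a': (1.25, 0.6), 'b': (9.125, 0.8)}
    return max(params, key=lambda c: gaussian_pdf(x, *params[c]))

# Away from the decision boundary, both classifiers agree: the bins are
# coarse, yet they preserve which class dominates at each test point.
for x in (1.2, 2.3, 8.5):
    assert classify_discretized(x) == classify_true_density(x)
```

The agreement here reflects the paper's point: what matters for classification is not recovering the density itself but preserving, within each interval, which class has the higher posterior.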
| Year | DOI | Venue |
| --- | --- | --- |
| 2003 | 10.1007/978-3-540-24581-0_37 | AI 2003: Advances in Artificial Intelligence |

| Keywords | Field | DocType |
| --- | --- | --- |
| naive bayes, probability density function, naive bayes classifier | Discrete mathematics, Discretization, Discretization error, Pattern recognition, Naive Bayes classifier, Error tolerance, Computer science, Algorithm, Artificial intelligence, Probability density function, Decision boundary, Discretization of continuous features | Conference |

| Volume | ISSN | Citations |
| --- | --- | --- |
| 2903 | 0302-9743 | 40 |

| PageRank | References | Authors |
| --- | --- | --- |
| 2.03 | 31 | 2 |
| Name | Order | Citations | PageRank |
| --- | --- | --- | --- |
| Ying Yang | 1 | 206 | 10.51 |
| Geoffrey I. Webb | 2 | 3130 | 234.10 |