Abstract |
---|
In many applications, false positives (type I errors) and false negatives (type II errors) have different impacts. In medicine, falsely diagnosing a healthy person as sick (a false positive) is not considered as bad as diagnosing a sick person as healthy (a false negative). But we are also willing to accept some rate of false negative errors in order to make the classification task possible at all. Where the line is drawn is subjective and prone to controversy. Usually, this compromise is given by a cost matrix that defines an exchange rate between the two error types. For many reasons, however, it might not be natural to think of this trade-off in terms of relative costs. We explore novel learning paradigms where the trade-off is instead given as the amount of false negatives we are willing to tolerate. The classifier then tries to minimize false positives while keeping false negatives within the acceptable bound. Here we consider classifiers based on kernel density estimation, gradient descent modifications, and applying a threshold to classification and ranking scores. |
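The last of the approaches mentioned in the abstract, thresholding a classifier's scores so that the false-negative rate stays within a tolerated bound, can be illustrated with a minimal sketch. The function `threshold_for_fn_bound` and the toy data below are illustrative assumptions, not the paper's actual method or experiments: the idea shown is simply to place the decision threshold at the score quantile of the positives that sacrifices at most the tolerated fraction of them, which then leaves the false-positive rate as low as the score distribution allows.

```python
import numpy as np

def threshold_for_fn_bound(scores, labels, max_fn_rate):
    """Pick the largest decision threshold whose false-negative rate
    on the positive class stays within max_fn_rate.

    scores: higher score means "more likely positive" (e.g. sick).
    labels: 1 for positive (sick), 0 for negative (healthy).
    """
    pos_scores = np.sort(scores[labels == 1])
    n_pos = len(pos_scores)
    # Number of positives we are allowed to misclassify as negative.
    k = int(np.floor(max_fn_rate * n_pos))
    # Thresholding at pos_scores[k] sends exactly the k lowest-scoring
    # positives below the threshold; everything >= threshold is positive.
    return pos_scores[k]

# Hypothetical scores from some scoring classifier (not from the paper).
scores = np.array([0.1, 0.2, 0.35, 0.4, 0.6, 0.7, 0.8, 0.9])
labels = np.array([0,   0,   1,    0,   1,   1,   1,   1])

t = threshold_for_fn_bound(scores, labels, max_fn_rate=0.25)
preds = (scores >= t).astype(int)
fn_rate = ((labels == 1) & (preds == 0)).sum() / (labels == 1).sum()
fp_rate = ((labels == 0) & (preds == 1)).sum() / (labels == 0).sum()
```

With a 25% tolerance on five positives, one positive (score 0.35) may be sacrificed, the threshold lands at 0.6, and all three negatives fall below it, so no false positives remain on this toy data.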
Year | DOI | Venue |
---|---|---|
2017 | 10.1007/978-3-319-59147-6_47 | ADVANCES IN COMPUTATIONAL INTELLIGENCE, IWANN 2017, PT II |
Keywords | Field | DocType |
---|---|---|
Classification costs,Cost matrix,Imbalance data,Kernel density estimation,Ranking,Gradient descent,Neural networks | Gradient descent,Cost matrix,Pattern recognition,Ranking,Computer science,Artificial intelligence,Type I and type II errors,Artificial neural network,Classifier (linguistics),Machine learning,False positive paradox,Kernel density estimation | Conference |
Volume | ISSN | Citations |
---|---|---|
10306 | 0302-9743 | 2 |
PageRank | References | Authors |
---|---|---|
0.38 | 6 | 4 |
Name | Order | Citations | PageRank |
---|---|---|---|
Ricardo Cruz | 1 | 10 | 3.28 |
Kelwin Fernandes | 2 | 36 | 7.71 |
Joaquim Pinto Da Costa | 3 | 262 | 14.82 |
Jaime S. Cardoso | 4 | 543 | 68.74 |