Abstract
---
In most machine learning applications, classification accuracy is not the primary metric of interest. Binary classifiers that face class imbalance are often evaluated by the $F_\beta$ score, area under the precision-recall curve, Precision at K, and other metrics. The maximization of many of these metrics can be expressed as a constrained optimization problem, where the constraint is a function of the classifier's predictions. In this paper we propose a novel framework for learning with constraints that can be expressed as a predicted positive rate (or negative rate) on a subset of the training data. We explicitly model the threshold at which a classifier must operate to satisfy the constraint, yielding a surrogate loss function that avoids the complexity of constrained optimization. The method is model-agnostic and only marginally more expensive than minimization of the unconstrained loss. Experiments on a variety of benchmarks show competitive performance relative to existing baselines.
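The core idea in the abstract lends itself to a short illustration: if a classifier may predict positive on at most a fraction $p$ of the data, the decision threshold that exactly meets the constraint is the $(1-p)$-quantile of the model's scores, which can be estimated from each training batch and folded into an ordinary surrogate loss. Below is a minimal PyTorch sketch of that construction; the function name `rate_constrained_hinge_loss`, the hinge surrogate, and the batch-level quantile estimate are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def rate_constrained_hinge_loss(scores, labels, max_positive_rate):
    """Hedged sketch of a quantile-based surrogate loss.

    Constraint: predict positive on at most `max_positive_rate` of the
    batch. The threshold meeting that rate is the (1 - p)-quantile of
    the scores; positive examples scoring below it incur a hinge penalty.
    """
    # Empirical threshold at the (1 - p)-quantile of the batch scores.
    # detach() treats the estimate as a constant for this step; the
    # paper's exact handling of the threshold term may differ.
    threshold = torch.quantile(scores, 1.0 - max_positive_rate).detach()

    # Hinge surrogate on the positive examples: each positive example
    # is penalized in proportion to how far it falls below threshold + 1.
    pos = labels.bool()
    margins = 1.0 - (scores[pos] - threshold)
    return torch.clamp(margins, min=0.0).mean()

# Toy usage: 100 random scores, ~30% positive labels, 10% rate budget.
scores = torch.randn(100, requires_grad=True)
labels = (torch.rand(100) < 0.3).long()
loss = rate_constrained_hinge_loss(scores, labels, max_positive_rate=0.1)
loss.backward()
```

Because the threshold is recomputed from the scores themselves, no Lagrange multipliers or projection steps are needed, which is consistent with the abstract's claim that the method is only marginally more expensive than minimizing the unconstrained loss.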
Year | Venue | Field
---|---|---
2018 | arXiv: Learning | Mathematical optimization, Ranking, Quantile function, Quantile, Artificial intelligence, Constrained optimization problem, Optimization problem, Mathematics, Machine learning, Estimator

DocType | Volume | Citations
---|---|---
Journal | abs/1803.00067 | 0

PageRank | References | Authors
---|---|---
0.34 | 2 | 3
Name | Order | Citations | PageRank |
---|---|---|---
Alan Mackey | 1 | 4 | 0.73 |
Xiyang Luo | 2 | 17 | 5.09 |
Elad Eban | 3 | 29 | 4.86 |