Title
Optimization with Non-Differentiable Constraints with Applications to Fairness, Recall, Churn, and Other Goals.
Abstract
We show that many machine learning goals, such as improved fairness metrics, can be expressed as constraints on the model's predictions, which we call rate constraints. We study the problem of training non-convex models subject to these rate constraints (or any non-convex and non-differentiable constraints). In the non-convex setting, the standard approach of Lagrange multipliers may fail. Furthermore, if the constraints are non-differentiable, then one cannot optimize the Lagrangian with gradient-based methods. To solve these issues, we introduce the proxy-Lagrangian formulation. This new formulation leads to an algorithm that produces a stochastic classifier by playing a two-player non-zero-sum game solving for what we call a semi-coarse correlated equilibrium, which in turn corresponds to an approximately optimal and feasible solution to the constrained optimization problem. We then give a procedure which shrinks the randomized solution down to one that is a mixture of at most $m+1$ deterministic solutions, given $m$ constraints. This culminates in algorithms that can solve non-convex constrained optimization problems with possibly non-differentiable and non-convex constraints with theoretical guarantees. We provide extensive experimental results enforcing a wide range of policy goals including different fairness metrics, and other goals on accuracy, coverage, recall, and churn.
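The two-player game described in the abstract can be illustrated on a toy problem. The sketch below is a hypothetical, minimal illustration (not the paper's code or experiments): a 1-D threshold classifier predicts positive when x > b, trained to minimize hinge loss subject to a rate constraint that coverage, mean(1[x > b]), be at least 0.8. Coverage is a step function of b, so the true constraint is non-differentiable; following the proxy-Lagrangian idea, the model player descends a differentiable sigmoid surrogate of the constraint, while the multiplier player ascends on the true, non-differentiable constraint violation. All names, data, and step sizes here are made-up assumptions for the sketch.

```python
import numpy as np

# Toy data (assumed for this sketch): x ~ Uniform(-2, 2), label +1 iff x > 0.
rng = np.random.default_rng(0)
x = rng.uniform(-2.0, 2.0, size=2000)
y = np.where(x > 0.0, 1.0, -1.0)

def g_true(b):
    # True rate constraint g(b) = 0.8 - coverage(b) <= 0.
    # coverage(b) = mean(1[x > b]) is a step function of b: non-differentiable.
    return 0.8 - np.mean(x > b)

def grad_objective(b):
    # d/db of the mean hinge loss mean(max(0, 1 - y * (x - b))).
    active = (1.0 - y * (x - b)) > 0.0
    return np.mean(np.where(active, y, 0.0))

def grad_proxy(b, k=5.0):
    # d/db of the differentiable surrogate 0.8 - mean(sigmoid(k * (x - b))).
    s = 1.0 / (1.0 + np.exp(-k * (x - b)))
    return np.mean(k * s * (1.0 - s))

b, lam = 0.0, 0.0
eta_b, eta_lam = 0.05, 1.0
iterates = []
for _ in range(1000):
    # Model player: gradient step on objective + lam * proxy constraint.
    b -= eta_b * (grad_objective(b) + lam * grad_proxy(b))
    # Multiplier player: projected ascent on the TRUE non-differentiable g.
    lam = max(0.0, lam + eta_lam * g_true(b))
    iterates.append(b)

# The paper's guarantees are for a randomized mixture of iterates (a
# stochastic classifier); as a crude stand-in, average the later iterates.
b_bar = float(np.mean(iterates[500:]))
print(b_bar, np.mean(x > b_bar))
```

For Uniform(-2, 2) data the constraint binds at b = -1.2, where coverage equals 0.8, so the averaged iterate should land well below the unconstrained hinge optimum b = 0 and yield coverage near the 0.8 target. The paper's further step, shrinking the randomized mixture down to at most m+1 deterministic solutions for m constraints, is not shown here.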
Year: 2018
Venue: arXiv: Learning
Field: Correlated equilibrium, Mathematical optimization, Lagrangian, Lagrange multiplier, Differentiable function, Constrained optimization problem, Classifier (linguistics), Recall, Mathematics
DocType:
Volume: abs/1809.04198
Citations: 0
Journal:
PageRank: 0.34
References: 0
Authors: 7
Name               Order  Citations  PageRank
Andrew Cotter      1      851        78.35
Heinrich Jiang     2      2          0.73
Serena Wang        3      4          4.09
Taman Narayan      4      0          0.68
Maya R. Gupta      5      5          1.12
Seungil You        6      39         6.79
Karthik Sridharan  7      1145       76.94