Title
Approximate Heavily-Constrained Learning with Lagrange Multiplier Models.
Abstract
In machine learning applications such as ranking fairness or fairness over intersectional groups, one often encounters optimization problems with an extremely large number of constraints. In particular, with ranking fairness tasks, there may even be a variable number of constraints, e.g., one for each query in the training set. In these cases, the standard approach of optimizing a Lagrangian while maintaining one Lagrange multiplier per constraint may no longer be practical. Our proposal is to associate a feature vector with each constraint, and to learn a "multiplier model" that maps each such vector to the corresponding Lagrange multiplier. We prove optimality, approximate feasibility and generalization guarantees under assumptions on the flexibility of the multiplier model, and empirically demonstrate that our method is effective on real-world case studies.
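The sketch below illustrates the structure described in the abstract, not the authors' implementation: instead of maintaining one Lagrange multiplier per constraint, a small "multiplier model" maps each constraint's feature vector to its multiplier, and the Lagrangian is optimized as a min-max game by simple simultaneous gradient descent-ascent. The toy quadratic objective, the linear constraints, the choice of (a_i, b_i) as constraint features, the linear-plus-softplus multiplier model, and the learning rates are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 5, 200                              # primal dimension, number of constraints
theta = np.zeros(d)                        # primal model parameters
theta_star = rng.normal(size=d)            # toy objective: f(theta) = 0.5 * ||theta - theta_star||^2
A = rng.normal(size=(m, d))                # toy constraints: g_i(theta) = a_i . theta - b_i <= 0
b = np.abs(rng.normal(size=m)) + 0.1       # b > 0, so theta = 0 is strictly feasible

# Constraint features: here simply (a_i, b_i); in general they are assumed to be
# supplied by the application (e.g. per-query statistics in ranking fairness).
Phi = np.hstack([A, b[:, None]])           # shape (m, d + 1)
w = np.zeros(Phi.shape[1])                 # multiplier-model parameters: d + 1 of them, not m

def sigmoid(z):
    return np.exp(-np.logaddexp(0.0, -z))  # numerically stable logistic

lr_theta, lr_w = 5e-3, 5e-4
for step in range(20000):
    z = Phi @ w
    lam = np.logaddexp(0.0, z)             # lambda_i = softplus(phi_i . w) >= 0
    g = A @ theta - b                      # constraint values g_i(theta)
    # Lagrangian: L(theta, w) = f(theta) + sum_i lambda_i(w) * g_i(theta)
    grad_theta = (theta - theta_star) + A.T @ lam   # descend in theta (minimize L)
    grad_w = Phi.T @ (g * sigmoid(z))               # ascend in w (maximize L)
    theta -= lr_theta * grad_theta
    w += lr_w * grad_w

print("max constraint violation:", (A @ theta - b).max())
print("objective value:", 0.5 * np.sum((theta - theta_star) ** 2))
```

The point of the construction is that the number of learned multiplier parameters (d + 1 here) is independent of the number of constraints m; how well the resulting multipliers approximate the per-constraint optimum depends on the flexibility of the multiplier model and the informativeness of the constraint features, which is what the paper's guarantees are stated in terms of.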
Year
2020
Venue
NeurIPS
DocType
Conference
Volume
33
Citations
0
PageRank
0.34
References
0
Authors
5
Name                      Order   Citations   PageRank
Narasimhan, Harikrishna   1       161         17.48
Andrew Cotter             2       851         78.35
Zhou, Yichen              3       0           0.34
Serena Wang               4       4           4.09
Guo Wenshuo               5       0           1.35