Title
A comparative study of fairness-enhancing interventions in machine learning.
Abstract
Computers are increasingly used to make decisions that have significant impact on people's lives. Often, these predictions can affect different population subgroups disproportionately. As a result, the issue of fairness has received much recent interest, and a number of fairness-enhanced classifiers have appeared in the literature. This paper seeks to study the following questions: how do these different techniques fundamentally compare to one another, and what accounts for the differences? Specifically, we seek to bring attention to many under-appreciated aspects of such fairness-enhancing interventions that require investigation for these algorithms to receive broad adoption. We present the results of an open benchmark we have developed that lets us compare a number of different algorithms under a variety of fairness measures and on several existing datasets. We find that although different algorithms tend to prefer specific formulations of fairness preservation, many of these measures strongly correlate with one another. In addition, we find that fairness-preserving algorithms tend to be sensitive to fluctuations in dataset composition (simulated in our benchmark by varying training-test splits) and to different forms of preprocessing, indicating that fairness interventions might be more brittle than previously thought.
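A minimal sketch (not the authors' benchmark code) of the kind of experiment the abstract describes: re-running a classifier over several random training-test splits and checking how much a fairness measure fluctuates. The synthetic dataset, column names, and the choice of demographic parity difference as the measure are illustrative assumptions, not details taken from the paper.

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for a real benchmark dataset (e.g., Adult or COMPAS).
n = 2000
group = rng.integers(0, 2, size=n)                 # protected attribute (0/1)
x = rng.normal(size=(n, 3)) + group[:, None] * 0.5
y = (x.sum(axis=1) + rng.normal(scale=1.0, size=n) > 0.5).astype(int)
data = pd.DataFrame(x, columns=["f1", "f2", "f3"])
data["group"] = group
data["label"] = y

def demographic_parity_difference(y_pred, groups):
    """Absolute difference in positive-prediction rates between the two groups."""
    return abs(y_pred[groups == 1].mean() - y_pred[groups == 0].mean())

dpd_per_split = []
for seed in range(10):  # vary the training-test split, as the benchmark does
    train, test = train_test_split(data, test_size=0.3, random_state=seed)
    clf = LogisticRegression(max_iter=1000).fit(train[["f1", "f2", "f3"]], train["label"])
    preds = clf.predict(test[["f1", "f2", "f3"]])
    dpd_per_split.append(demographic_parity_difference(preds, test["group"].to_numpy()))

print("demographic parity difference per split:", np.round(dpd_per_split, 3))
print("spread (max - min):", round(max(dpd_per_split) - min(dpd_per_split), 3))

A large spread across splits for a fairness-preserving classifier is the kind of brittleness the paper reports; the same loop would be repeated for each intervention and measure under study.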
Year
2019
DOI
10.1145/3287560.3287589
Venue
FAT* '19: Proceedings of the 2019 Conference on Fairness, Accountability, and Transparency
Keywords
Fairness-aware machine learning, benchmarks
DocType
Conference
Volume
abs/1802.04422
Citations
24
PageRank
0.79
References
19
Authors
6
Name                        Order  Citations  PageRank
Sorelle A. Friedler         1      293        24.26
Carlos E. Scheidegger       2      584        30.83
Suresh Venkatasubramanian   3      2675       190.15
Sonam Choudhary             4      24         0.79
Evan P. Hamilton            5      24         0.79
Derek Roth                  6      24         0.79