Title
Measuring the Biases that Matter: The Ethical and Causal Foundations for Measures of Fairness in Algorithms
Abstract
Measures of algorithmic bias can be roughly classified into four categories, distinguished by the conditional probabilistic dependencies to which they are sensitive. First, measures of "procedural bias" diagnose bias when the score returned by an algorithm is probabilistically dependent on a sensitive class variable (e.g. race or sex). Second, measures of "outcome bias" capture probabilistic dependence between class variables and the outcome for each subject (e.g. parole granted or loan denied). Third, measures of "behavior-relative error bias" capture probabilistic dependence between class variables and the algorithmic score, conditional on target behaviors (e.g. recidivism or loan default). Fourth, measures of "score-relative error bias" capture probabilistic dependence between class variables and behavior, conditional on score. Several recent discussions have demonstrated tradeoffs between these different measures of algorithmic bias, and at least one recent paper has suggested conditions under which the tradeoffs may be minimized. In this paper we use the machinery of causal graphical models to show that, under standard assumptions, the underlying causal relations among variables force some of these tradeoffs. We delineate a number of normative considerations that are encoded in different measures of bias, with reference to the philosophical literature on the wrongfulness of disparate treatment and disparate impact. While both kinds of error bias are nominally motivated by a concern to avoid disparate impact, we argue that consideration of causal structures shows that these measures are better understood as complicated and unreliable measures of procedural bias (i.e. disparate treatment). Moreover, while procedural bias is indicative of disparate treatment, we show that the measure of procedural bias one ought to adopt depends on the account of the wrongfulness of disparate treatment one endorses. Finally, given that neither score-relative nor behavior-relative measures of error bias capture the relevant normative considerations, we suggest that error bias proper is best measured by score-based measures of accuracy, such as the Brier score.
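For concreteness, the four dependence conditions and the suggested Brier score can all be estimated from data. The following Python sketch is purely illustrative and not taken from the paper; the variable names (A for the sensitive class, S for the algorithmic score, O for the decision outcome, Y for the target behavior) and the synthetic data-generating process are assumptions made only for this example.

# Illustrative sketch (assumptions, not from the paper): estimating the four
# kinds of (conditional) dependence distinguished in the abstract, plus the
# Brier score. Hypothetical variables: A = sensitive class, S = algorithmic
# score, O = decision outcome, Y = target behavior.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
A = rng.integers(0, 2, n)                             # sensitive class (0/1)
S = np.clip(rng.normal(0.5 + 0.1 * A, 0.2, n), 0, 1)  # score in [0, 1]
O = (S > 0.5).astype(int)                             # outcome: act iff S > 0.5
Y = rng.binomial(1, S)                                # behavior (e.g. recidivism)

# Procedural bias: dependence of the score S on the class A.
procedural = S[A == 1].mean() - S[A == 0].mean()

# Outcome bias: dependence of the outcome O on the class A.
outcome = O[A == 1].mean() - O[A == 0].mean()

# Behavior-relative error bias: dependence of S on A, conditional on Y.
behavior_rel = [S[(A == 1) & (Y == y)].mean() - S[(A == 0) & (Y == y)].mean()
                for y in (0, 1)]

# Score-relative error bias: dependence of Y on A, conditional on (binned) S.
high = S > 0.5
score_rel = [Y[(A == 1) & (high == h)].mean() - Y[(A == 0) & (high == h)].mean()
             for h in (False, True)]

# Brier score: mean squared error of the probabilistic score against behavior.
brier = np.mean((S - Y) ** 2)

print(f"procedural bias   E[S|A=1] - E[S|A=0]  = {procedural:+.3f}")
print(f"outcome bias      E[O|A=1] - E[O|A=0]  = {outcome:+.3f}")
print(f"behavior-relative bias (Y=0, Y=1)      = {behavior_rel[0]:+.3f}, {behavior_rel[1]:+.3f}")
print(f"score-relative bias (low S, high S)    = {score_rel[0]:+.3f}, {score_rel[1]:+.3f}")
print(f"Brier score       E[(S-Y)^2]           = {brier:.3f}")

In this synthetic setting the score is perfectly calibrated by construction (Y is drawn with probability S), so behavior is independent of class conditional on the exact score, and the binned score-relative measure is near zero up to the coarseness of the binning, while the procedural and behavior-relative measures are not. This mirrors the kind of tradeoff among measures that the abstract describes.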
Year
2019
DOI
10.1145/3287560.3287573
Venue
FAT* '19: Proceedings of the 2019 Conference on Fairness, Accountability, and Transparency
Keywords
Algorithmic decision-making, fairness, causal inference, discrimination
DocType
Conference
Citations
2
PageRank
0.37
References
0
Authors
2
Name                 Order   Citations   PageRank
Bruce Glymour        1       8           1.53
Jonathan Herington   2       2           0.37