Document Type

Article

Abstract

There has been an explosion of concern about the use of computers to make decisions affecting humans, from hiring and lending approvals to setting prison terms. Many have pointed out that using computer programs to make these decisions may propagate biases or otherwise lead to undesirable outcomes. Some have called for increased transparency, and others have called for algorithms to be tuned to produce more racially balanced outcomes. Attention to the problem is likely to grow as computers make increasingly important and sophisticated decisions in our daily lives. Drawing on both the computer science and legal literature on algorithmic fairness, this paper makes four major contributions to the debate over algorithmic discrimination. First, it provides a legal response to a recent flurry of work in computer science seeking to incorporate "fairness" into algorithmic decision-makers by demonstrating that legal rules generally apply as side constraints, not as fairness functions that can be optimized. Second, by looking at the problem through the lens of discrimination law, the paper recognizes that the problems posed by computational decision-makers closely resemble the historical, institutional discrimination that discrimination law has evolved to control, rebutting the claim that the problem is truly novel because it involves computerized decision-making. Third, the paper responds to calls for transparency in computational decision-making by demonstrating that transparency is unnecessary for accountability and that discrimination law itself provides a model for handling cases of unfair algorithmic discrimination, with or without transparency. Fourth, the paper addresses a problem that has divided the literature on the topic: how to correct for discriminatory results produced by algorithms. Rather than treating the problem as a binary choice, I offer a third way, one that disaggregates the process of correcting algorithmic decision-makers into two separate decisions: a decision to reject an old process and a separate decision to adopt a new one. Those two decisions are subject to different legal requirements, providing added flexibility to firms and agencies seeking to avoid the worst kinds of discriminatory outcomes. Examples of disparate outcomes generated by algorithms, combined with the novelty of computational decision-making, are prompting many to push for new regulations requiring algorithmic fairness. But, in the end, current discrimination law provides most of the answers for the wide variety of fairness-related claims likely to arise in the context of computational decision-makers, regardless of the specific technology underlying them.

Included in

Law Commons
