Document Type

Article

Abstract

Automated decision-making has become widespread in recent years, largely due to advances in machine learning. As a result of this trend, machine learning systems are increasingly used to make decisions in high-stakes domains, such as employment or university admissions. The weightiness of these decisions has prompted the realization that, like humans, machines must also comply with the law. But human decision-making processes are quite different from automated decision-making processes, which creates a mismatch between laws and the decision makers to which they are intended to apply. In turn, this mismatch can lead to counterproductive outcomes. We take antidiscrimination laws in employment as a case study, with a particular focus on Title VII of the Civil Rights Act of 1964. A common strategy for mitigating bias in employment decisions is to "blind" human decision makers to the sensitive attributes of the applicants, such as race. The same strategy can also be used in an automated decision-making context by blinding the machine learning system to the race of the applicants (strategy 1). This strategy seems to comply with Title VII, but it does not necessarily mitigate bias because machine learning systems are adroit at using proxies for race when they are available. An alternative strategy is to not blind the system to race (strategy 2), thereby allowing it to use this information to mitigate bias. However, although preferable from a machine learning perspective, this strategy appears to violate Title VII. We contend that this conflict between strategies 1 and 2 highlights a broader legal and policy challenge, namely, that laws designed to regulate human behavior may not be appropriate when stretched to apply to machines. Indeed, they may even be detrimental to the very people that they were designed to protect. Although scholars have explored legal arguments in an attempt to press strategy 2 into compliance with Title VII, we believe there lies a middle ground between strategies 1 and 2 that involves partial blinding, that is, blinding the system to race only during deployment and not during training (strategy 3). We present strategy 3 as a "Goldilocks" solution for discrimination in employment decisions (as well as in other domains), because it allows for the mitigation of bias while still complying with Title VII. Ultimately, any solution to the general problem of stretching human laws to apply to machines must be sociotechnical in nature, drawing on work in both machine learning and the law. This is borne out in strategy 3, which involves innovative work in machine learning (viz. the development of disparate learning processes) and creative legal analysis (viz. analogizing strategy 3 to legally accepted auditing procedures).
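To make strategy 3 concrete, the sketch below illustrates one possible form of a disparate learning process, assuming Python with scikit-learn and toy synthetic data. The sensitive attribute is used only at training time (here via Kamiran-and-Calders-style reweighing of training examples, chosen as an illustrative technique rather than the article's specific method); the fitted model takes only non-sensitive features as inputs, so the deployed predictor is blind to race.

# Minimal sketch of a disparate learning process (strategy 3): the group
# attribute informs training, but the deployed model never receives it.
# Assumes scikit-learn and hypothetical synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: X = non-sensitive features, g = sensitive group, y = label.
n = 1000
g = rng.integers(0, 2, size=n)                                  # sensitive attribute (0/1)
X = rng.normal(size=(n, 3)) + g[:, None] * 0.5                  # proxies correlated with g
y = (rng.random(n) < np.where(g == 1, 0.3, 0.6)).astype(int)    # biased historical labels

# Training-time use of the sensitive attribute: reweigh each (group, label)
# cell so that group membership and the positive label look independent.
weights = np.empty(n)
for gv in (0, 1):
    for yv in (0, 1):
        mask = (g == gv) & (y == yv)
        expected = (g == gv).mean() * (y == yv).mean()
        weights[mask] = expected / mask.mean()

# The model itself is trained only on non-sensitive features.
clf = LogisticRegression().fit(X, y, sample_weight=weights)

# Deployment: predictions depend only on non-sensitive features; race is
# never an input, mirroring the "blind at deployment" half of strategy 3.
new_applicants = rng.normal(size=(5, 3))
print(clf.predict(new_applicants))

Because the sensitive attribute appears only in the computation of training weights, the resulting decision rule can be audited and then deployed without ever querying an applicant's race, which is the property the article's legal analysis analogizes to accepted auditing procedures.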
