Wednesday, June 29, 2016
A data revolution is transforming the workplace. Employers increasingly rely on algorithms to decide who gets interviewed, hired, or promoted. Proponents of the new data science claim that automated decision systems can make better decisions faster, and that they are also fairer because they replace biased human decision-makers with “neutral” data. However, data are not neutral, and algorithms can discriminate. The legal world has not yet grappled with these challenges to workplace equality. The risks posed by data analytics call for fundamentally rethinking anti-discrimination doctrine. When decision-making algorithms produce biased outcomes, they may seem to resemble familiar disparate impact cases, but that doctrine turns out to be a poor fit. Developed in a different context, disparate impact doctrine fails to address the ways in which algorithms can introduce bias and cause harm. This Article argues instead for a plausible, revisionist interpretation of Title VII, in which disparate treatment and disparate impact are not the only recognized forms of discrimination. A close reading of the text suggests that Title VII also prohibits classification bias, namely, the use of classification schemes that have the effect of exacerbating inequality or disadvantage along the lines of race or another protected category. This description closely matches the concerns raised by workplace analytics. Framing the problem in terms of classification bias leads to quite different conclusions about how the anti-discrimination norm should be applied to algorithms, suggesting both the possibilities and the limits of Title VII’s liability-focused model.