Monday, April 16, 2018
Ric Simmons (Ohio State University (OSU) - Michael E. Moritz College of Law) has posted Big Data, Machine Judges, and the Legitimacy of the Criminal Justice System (University of California Davis Law Review, Vol. 52, 2018) on SSRN. Here is the abstract:
Predictive algorithms are rapidly spreading throughout the criminal justice system. They are used to allocate police resources more efficiently, identify potentially dangerous individuals, and advise judges at bail hearings and sentencing determinations. These algorithms have the potential to increase the accuracy, efficiency, and fairness of the criminal justice system, but they have also been criticized on the grounds that they may reinforce pre-existing biases against minorities. One aspect of these tools that has not yet been discussed in the literature, however, is whether they will be accepted as legitimate. For centuries, the critical decisions that affect people’s safety and liberty have been made by human beings; now, for the first time in human history, we are delegating large aspects of these decisions to machines. This article addresses whether people will be willing to accept this change, and if not, how we can adapt the algorithms to make them more acceptable.
In determining whether predictive algorithms are likely to be accepted by criminal defendants, the article draws on the field of procedural justice, which sets out numerous factors that determine whether a participant in a judicial proceeding believes the process is fair. The article finds that predictive algorithms do not fare particularly well on these factors: they may not be seen as trustworthy or neutral, and they do not give defendants a significant opportunity to participate in the process. The article suggests that criminal defendants would be more likely to view predictive algorithms as legitimate if the algorithms were made more transparent and were designed to ensure that they did not use data tainted by past discriminatory practices.
The article then examines whether the general population is likely to perceive predictive algorithms as legitimate, surveying the psychological barriers that people face in accepting such algorithms. The article presents an original empirical study of six hundred individuals who were presented with a hypothetical case in which a judge uses a predictive algorithm to assist in a bail hearing. The study indicates that individuals are likely to accept predictive algorithms, as long as certain criteria are met.