CrimProf Blog

Editor: Kevin Cole
Univ. of San Diego School of Law

Monday, September 26, 2022

Green et al. on AI and Public Safety Assessment

Ben Green, Yaniv Yacoby, Christopher L. Griffin, and Finale Doshi-Velez (University of Michigan at Ann Arbor - Society of Fellows; Harvard University - John A. Paulson School of Engineering and Applied Sciences; University of Arizona - James E. Rogers College of Law; and Harvard University - John A. Paulson School of Engineering and Applied Sciences) have posted “If it Didn't Happen, Why Would I Change My Decision?”: How Judges Respond to Counterfactual Explanations for the Public Safety Assessment on SSRN. Here is the abstract:
Many researchers and policymakers have expressed excitement about algorithmic explanations enabling more fair and responsible decision-making. However, recent experimental studies have found that explanations do not always improve human use of algorithmic advice. In this study, we shed light on how people interpret and respond to counterfactual explanations (CFEs), explanations that show how a model’s output would change with marginal changes to its input(s), in the context of pretrial risk assessment instruments (PRAIs). We ran think-aloud trials with eight sitting U.S. state court judges, providing them with recommendations from a PRAI that includes CFEs. We found that the CFEs did not alter the judges’ decisions. At first, judges misinterpreted the counterfactuals as real—rather than hypothetical—changes to defendants. Once judges understood what the counterfactuals meant, they ignored them, stating their role is only to make decisions regarding the actual defendant in question. The judges also expressed a mix of reasons for ignoring or following the advice of the PRAI without CFEs. These results add to the literature detailing the unexpected ways in which people respond to algorithms and explanations. They also highlight new challenges associated with improving human-algorithm collaborations through explanations.
