Wednesday, November 30, 2022
Brandon L. Garrett and Cynthia Rudin (Duke University School of Law and Duke University - Pratt School of Engineering) have posted Glass Box Artificial Intelligence in Criminal Justice on SSRN. Here is the abstract:
As we embrace data-driven technologies across a wide range of human activities, policymakers and researchers increasingly sound alarms regarding the dangers posed by “black box” uses of artificial intelligence (AI) to society, democracy, and individual rights. Such models are either too complex for people to understand or are designed so that their functioning is inaccessible. This lack of transparency can have harmful consequences for the people affected. One central area of concern has been the criminal justice system, in which life, liberty, and public safety can be at stake. Judges have struggled with government claims that AI, such as that used in DNA mixture interpretation, risk assessments, facial recognition, and predictive policing, should remain a black box that is not disclosed to the defense or in court. Both champions and critics of AI have argued that we face a central trade-off: black box AI sacrifices interpretability for predictive accuracy. We write to counter this black box myth. We describe a body of computer science research showing that “glass box” AI that is interpretable can be more accurate.