Monday, June 7, 2021

New Article: "Technological Tethereds: Potential Impact of Untrustworthy Artificial Intelligence in Criminal Justice Risk Assessment Instruments" -- by Prof. Sonia Gipson Rankin

Professor Sonia Gipson Rankin of the University of New Mexico School of Law has published Technological Tethereds: Potential Impact of Untrustworthy Artificial Intelligence in Criminal Justice Risk Assessment Instruments in the Washington and Lee Law Review. The abstract is below, and the full article is available here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3662761

Abstract

Issues of racial inequality and violence are front and center in today’s society, as are issues surrounding artificial intelligence (AI). This Article, written by a law professor who is also a computer scientist, takes a deep dive into how and why hacked and rogue AI creates unlawful and unfair outcomes, particularly for persons of color.

Black Americans are disproportionately represented in the criminal justice system, and their stories are obfuscated. The seemingly endless back-to-back murders of George Floyd, Breonna Taylor, Ahmaud Arbery, and heartbreakingly many others have finally shaken the United States from its slumbering journey toward intentional criminal justice reform. Myths about Black crime and criminals are embedded in the data collected by AI and do not tell the truth of race and crime. However, the number of Black people harmed by hacked and rogue AI will dwarf all historical records, and the gravity of the harm is incomprehensible.

The lack of technical transparency and legal accountability leaves wrongfully convicted defendants without legal remedies if they are unlawfully detained because of a cyberattack, faulty or hacked data, or rogue AI. Scholars and engineers acknowledge that the artificial intelligence giving recommendations to law enforcement, prosecutors, judges, and parole boards lacks the common sense of an 18-month-old child. This Article reviews the ways AI is used in the legal system and the courts’ response to this use. It outlines the design schemes of proprietary risk assessment instruments used in the criminal justice system, identifies potential legal theories for victims, and recommends legal and technical remedies for victims of hacked data in criminal justice risk assessment instruments. It concludes that, with proper oversight, AI can increase fairness in the criminal justice system, but that without such oversight, AI-based products will further extinguish the liberty interests enshrined in the Constitution.

According to anti-lynching advocate Ida B. Wells-Barnett, “The way to right wrongs is to turn the light of truth upon them.” Thus, transparency is vital to safeguarding equity through AI design and must be the first step. The Article seeks ways to provide that transparency, for the benefit of all Americans, but particularly persons of color, who are far more likely to be impacted by AI deficiencies. It also suggests legal reforms that will help plaintiffs recover when AI goes rogue.

https://lawprofessors.typepad.com/racelawprof/2021/06/new-article-technological-tethereds-potential-impact-of-untrustworthy-artificial-intelligence-in-cri.html
