The emerging applications of artificial intelligence promise huge social benefits: significant gains in detecting breast cancer that allow radiologists to focus on other patient needs, the deciphering of text inside a 2,000-year-old scorched scroll so carbonized that it cannot be unrolled, the development of new drugs through 3D protein structure prediction, and so on. But there are perils too, as ordinary people, especially low-income Americans, can attest: AI systems that, usually with little explanation, deny home loans, set predatory rent prices, and reject employment applications, to name a few.
Yet nowhere is the use of AI as fraught as it is in the criminal justice system, where adverse decisions lead to starkly life-altering outcomes. This isn’t a hypothetical future concern: criminal justice automation of varying degrees has already arrived. Local police agencies can turn to predictive technologies to help direct patrol resources and even generate police reports. A number of jurisdictions use risk assessment tools to decide who should be detained pretrial. Others permit judges to use algorithmic tools to determine criminal sentences. Parole boards can rely on automated assessments to identify who should be eligible for parole.
So AI is already present in the criminal justice system. But how far should it go? In his Future Tense Fiction story “The 28th,” Mark Stasenko asks us to take the next logical step. What if automated systems weren’t just guides, but replacements for nearly every human in the process—including forensic analysts, defense attorneys, prosecutors, and even juries? In Stasenko’s provocative thought experiment, there are still human detectives engaged in investigations, but even they are under AI supervision, and almost every other criminal justice process has been delegated to AI agents. The detectives’ AI partner, Sybil, isn’t just a guide; “she” is a mandatory taskmaster, now backed up with the legal authority of a constitutional amendment. What would a world of nearly totally automated criminal cases look like—a Siri with coercive power?
Most technologists, scholars, and civil rights advocates today would balk at the idea, and rightly so. The incursion of AI into the criminal justice system has already brought with it a series of controversies and problems. Predictive policing, touted in 2011 as one of the year’s best inventions and widely praised in the early 2010s, became so associated with hidden bias and inaccuracy that some cities eventually banned it. Automated gunshot detection software promised to revolutionize police resource allocation but has proved plagued by errors and inefficiencies. Facial recognition technologies might seem like a foolproof method of identification, until you hear the stories of Robert Julian-Borchak Williams and others who have been wrongfully arrested on the basis of faulty matches.
The underlying premise of “The 28th,” as is often the case with the most perceptive speculative fiction, seems entirely plausible. Disillusioned with the criminal justice system and its traditional, human-centered processes, the public backs a constitutional amendment to do away with juries of one’s peers in criminal trials. The goal is to rid the system of humans’ implicit biases and misguided emotion.
The story’s detectives, Alex and Elijah, work not with but for AI agents that analyze data, direct police work, process cases, and deliver verdicts. The two stand at opposite ends of the spectrum of responses to AI: Alex opts for blind trust, Elijah for rejection. Neither extreme is realistic or advisable.
Elijah’s complete rejection of an AI-driven system, based on his nostalgic yearning for “intuitive” police work, papers over the many flaws of traditional policing and ignores the technological revolutions around him. The datafied world today requires some degree of automation to process the sheer amount of information out there; policing must adapt like every other institution in society. And as Elijah’s interventions—which I won’t spoil here—suggest, there is every reason to be skeptical of the “good old days” of human manipulation, bad faith, and error.
Alex’s unquestioning faith, for its part, is ill-conceived for different reasons. When he loses his wife to human error, he embraces all that an AI system promises because it appears to be everything that humans are not. For Alex, AI “eliminated” the “type of human error” that led to his wife’s death. But Alex fails to see that AI systems aren’t handed down by some infallible divinity; they are created by flawed human beings. Even worse, the judgments of his AI partner, Sybil, are opaque: we can’t know how Sybil resolves any criminal case because the process is hidden within a “black box.” The story’s 28th Amendment may have eliminated our own Sixth Amendment, but if due process is still a part of the future, defendants should shudder at the lack of transparency in the process.
There are other troubling aspects of Stasenko’s useful thought experiment. Sybil is not a government-created entity; these AI agents are produced by a private company whose own interests are presumably inclined toward market dominance and corporate secrecy, not public transparency or accountability. Sybil may be the dominant AI in criminal justice because she is the best program available—or because, as the story hints, her creators are the most politically savvy. Similar issues have already cropped up in our world. Some criminal defendants have been barred from accessing the algorithms used to convict them because private companies have invoked trade secret protections. Sybil’s creators may have few incentives to disclose the technology’s inner workings. That scenario feels far from just.
Alex realizes the follies of his blind faith in Sybil in the worst possible way: when he is unjustly accused of murder. And Elijah, the person who frees Alex from his AI-dictated fate, harbors his own flawed motivations. So what’s a rational person to do?
Part of the answer lies in verbal cues Stasenko has left within the story: the related concepts of faith and fate. Faith is what leaves Alex blind to Sybil’s faults. It is the loss of faith in the criminal justice system that led to the 28th Amendment. The fate of a defendant, once left to a human jury, is handed over to an all-encompassing AI agent. None of these extremes will do when it comes to the future of AI in the criminal justice system. Some AI tools will be beneficial to the criminal justice system, so long as there is oversight and accountability. As petabytes of police bodycam videos become the norm, for example, generative AI technologies can help already overburdened public defense attorneys review the data for their clients.
Policymakers must take the middle path when it comes to applying new technologies in the service of criminal justice: to weigh costs and benefits, to foreground human values like due process and justice, and to abjure any notion that AI alone can fix a system that is created by people and makes decisions of enormous consequence for people. And sometimes, society needs to say no. No to Sybil. Unfortunately, political support for such restraint may have receded. But one thing is clear: As “The 28th” warns us, AI is a tool, not an answer.