
Artificial intelligence poses serious risks in the criminal justice system

By ANUSHA RAO | September 13, 2020

PUBLIC DOMAIN: The susceptibility of predictive artificial intelligence to racial biases makes its use dangerous in the criminal justice system.

Whenever I tell people that I’m interested in artificial intelligence (AI), most of them bring up their favorite movie featuring an evil AI assembling an army of killer robots that threaten to wipe out humankind. I have to admit that I used to be right there with them, but as entertaining and enjoyable as those films are, they lead to a lot of misconceptions about what AI truly is and the very real ways it impacts our lives.

In the first two decades of the 21st century, the boom of advanced machine learning techniques and big data revolutionized modern computing. Highly capable AI has now made its way into nearly every field imaginable: medicine, finance, agriculture, manufacturing, the military and more. Rather than sentient beings, AI has taken the form of complex algorithms, ones that can diagnose breast cancer from mammograms more accurately than trained radiologists or detect DNA mutations in tumor gene sequences.

Now more than ever, AI has an enormous capability to impact people’s lives in a meaningful and substantial way. But it also raises multidimensional questions that simply don’t have easy answers.

In Steven Spielberg’s Minority Report, Tom Cruise leads Washington’s elite PreCrime Unit, a section of the police department dedicated solely to interpreting the visions of the Precogs, three psychics who forecast crimes like murder and robbery. The film showcases the dangers of a world where police use psychic technology to punish people before they commit a crime. At first I enjoyed it as another entertaining, albeit thought-provoking, science fiction tech-noir film. But as a tech junkie, I soon learned just how relevant the nearly 20-year-old film is.

One of the areas where AI is currently being implemented is at the intersection of law, government, policing and social issues like race: the criminal justice system.

Over the last few months, the rise of the Black Lives Matter movement and renewed scrutiny of race relations, policing and structural biases have brought the issues of the American criminal justice system to light. While it is an institution that claims to have been founded on the principle of fairness and justice for all, it is riddled with biases that disproportionately affect Black and brown Americans. From deeply flawed societal constructs that perpetuate injustice to discriminatory police, attorneys and judges, bias is one of the biggest issues afflicting our criminal justice system.

Criminal risk assessment algorithms are tools designed to predict a defendant’s future risk of misconduct, whether that’s the likelihood that they will reoffend or the likelihood that they will fail to appear for trial. They are the most commonly used form of AI in the justice system, employed across the country.

After taking in numerous types of data about the defendant, such as age, sex, socioeconomic status, family background and employment status, these tools reach a “prediction” of an individual’s risk, spitting out a specific percentage that indicates how likely they are to reoffend. These figures have been used to set bail, determine sentences and even contribute to determinations of guilt or innocence.
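
To make that concrete, here is a toy sketch of what such a scoring function can look like in code. The features, weights and intercept are invented for illustration and come from no real instrument, but the logistic form mirrors how many of these models map a defendant’s attributes to a single percentage.

```python
# A toy, entirely hypothetical scoring function: the features, weights and
# intercept are invented for illustration and come from no real instrument.
import math

def risk_score(age, employed, prior_arrests, prior_failures_to_appear):
    """Return a 'likelihood of reoffending' as a whole-number percentage."""
    # A weighted sum of the defendant's attributes, squashed into (0, 1) with a
    # sigmoid, the same logistic form many of these models take.
    z = (-0.04 * age
         - 0.8 * employed
         + 0.5 * prior_arrests
         + 0.6 * prior_failures_to_appear
         + 0.3)
    return round(100 / (1 + math.exp(-z)))

# A 22-year-old, unemployed defendant with two prior arrests becomes one number.
print(f"{risk_score(age=22, employed=0, prior_arrests=2, prior_failures_to_appear=0)}%")
```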

Their biggest selling point is objectivity: those who favor their use tout the impartial, unbiased nature of mathematical code. While a judge could be swayed by emotion and hand down a harsher punishment, an algorithm would never fall prey to such a human flaw.

Unfortunately, like many other forms of AI, these tools are subject to a seemingly uncontrollable problem: bias. The biggest source of bias in AI is bad training data. Modern risk assessment tools are driven by algorithms trained on historical crime data, using statistical methods to find patterns and connections. An algorithm trained on historical crime data will pick out patterns associated with crime, but those patterns are correlations, not causes.

More often than not, these patterns reflect existing problems in policing and the justice system. For example, if an algorithm found that lower income was correlated with higher recidivism, it would give defendants from low-income backgrounds a higher score. The very populations that have been targeted by law enforcement, like impoverished and minority communities, are at risk of receiving higher scores that label them as “more likely” to commit crimes. These scores are then presented to a judge, who uses them to make decisions regarding bail and sentencing.
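
A small, hypothetical simulation shows how that happens. In the invented scenario below, two groups offend at exactly the same rate, but one is policed more heavily, so its offenses are far more likely to produce an arrest record. A model trained on those records, which are all it ever sees, dutifully concludes that the over-policed group is riskier.

```python
# Hypothetical simulation; every rate below is invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 20_000

low_income = rng.integers(0, 2, size=n)   # 1 = defendant from a low-income area
offended = rng.random(n) < 0.20           # identical true rate for both groups

# Heavier policing of low-income neighborhoods means offenses there are far
# more likely to end up as an arrest record, and the record is all the model sees.
caught = offended & (rng.random(n) < np.where(low_income == 1, 0.9, 0.4))

model = LogisticRegression().fit(low_income.reshape(-1, 1), caught)

for group, label in [(0, "higher-income"), (1, "low-income")]:
    score = model.predict_proba([[group]])[0, 1]
    print(f"{label}: predicted 'risk' = {score:.0%}")
# Prints roughly 8% vs. 18%: the same behavior, very different scores,
# because the training data records who was policed, not who offended.
```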

This machine learning methodology amplifies and perpetuates biases by generating even more biased data to feed back into the algorithms, creating a self-reinforcing cycle. The cycle is coupled with a lack of accountability, since for many algorithms it is difficult to understand how they came to their decisions. In 2018, leading civil rights groups, including the National Association for the Advancement of Colored People and the American Civil Liberties Union, signed a letter raising concerns about the use of this type of AI in pretrial assessments.
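
The self-reinforcing part of that cycle can be sketched in a few lines. In the made-up scenario below, two neighborhoods offend at identical rates, but the records begin with a slight tilt; because patrols follow the records and only patrolled offenses become new records, the tilt snowballs until one neighborhood looks many times “riskier” than the other.

```python
# Toy model of the feedback loop; the neighborhoods, rates and starting
# records are all invented.
import numpy as np

rng = np.random.default_rng(7)
true_rate = np.array([0.2, 0.2])   # two neighborhoods with identical offending
arrests = np.array([5.0, 6.0])     # the records start with a slight tilt

for day in range(1_000):
    # The "algorithm": send today's patrol wherever the records look worst.
    target = int(np.argmax(arrests))
    # Only patrolled offenses become new data, so only the targeted side grows.
    if rng.random() < true_rate[target]:
        arrests[target] += 1.0

print("recorded arrests:", arrests)                        # roughly [5, 200+]
print("implied 'risk':", np.round(arrests / arrests.sum(), 2))
```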

The very idea that we can reduce complex human beings, people who deserve to be seen as human first and foremost, down to a number is appalling. As a society we treat those who have been incarcerated as throwaways, promoting revenge and punishment over rehabilitation. We make it nearly impossible for people to return to a normal life, ripping away their right to vote in many states and hurting their chances of employment. Putting a number on people’s heads adds to the already rampant dehumanization of minority communities in this country.

I typically find that fear of technology is rooted in a deep misunderstanding of how it actually works and how it impacts our lives. AI is a valuable tool with many practical and ethical applications, but the use of risk assessment tools in the criminal justice system perpetuates racial biases and should be outlawed immediately.

Anusha Rao is a freshman from Washington, D.C. studying Cognitive Science. She’s part of the Artificial Intelligence Society at Hopkins.

