Artificial Intelligence in the Criminal Justice System


Artificial intelligence (AI) has made its presence felt in the judicial system. Judges can now draw on special AI-powered tools designed to support their decisions with greater accuracy and less human bias.

Over the past few decades, artificial intelligence has shown itself capable of tackling many of the world’s most pressing issues and improving our daily lives. From self-driving cars to facial recognition software and programs that detect tumors, AI’s potential to serve society is vast. According to market research firm IDC, the global AI market is expected to reach around half a trillion US dollars by 2024, which suggests that this powerful technology will continue to gain ground across industries.

One of the most fascinating uses of AI is in the criminal justice system. AI can analyze a defendant’s risk and aid judges in making sentencing decisions.

Understandably, the use of AI in criminal justice is leading to worries about its own inherent bias. But when implemented correctly, algorithms can ultimately be less biased than humans. Nevertheless, there are some important social and ethical concerns, namely transparency and consistency. These need to be resolved if AI is to be used effectively to assist judges in their sentencing decisions and to provide accurate assessments of criminals’ risk and needs.

Read on to find out more about how the justice system makes use of AI, where and how AI tools can make a difference, and how transparency, bias, and personal data protection concerns can be tackled.

Background of risk/needs assessment tools

Because of human biases, variabilities, and differences in opinion, many sentencing decisions can be skewed or impacted by unrelated factors, leading to unintentionally unjust outcomes. For instance, one study discovered that judges were much more lenient in their sentencing decisions early in the morning and just after lunchtime. In contrast, they were much more likely to assign harsher sentences at the end of the day and just before their breaks.

This is only one example of how judges’ sentencing decisions can be affected by an entirely arbitrary factor (the time of day of the trial). Indeed, judges’ decisions can be influenced by a plethora of “irrelevant” elements that lead to decisions that are less than fair. For example, since judges can have vastly different opinions from one another, a sentence that seems fair and impartial to one judge may be considered absurd in the eyes of another.

Moreover, each judge has their preferred sentencing methods: some judges favor parole, while others prefer to give criminals more jail time for certain crimes. This is due to their own personal views on the effectiveness of certain modes of punishment and rehabilitation. The result is that a criminal’s sentence could vary greatly, simply due to which judge happens to be sentencing them.

A possible solution to this bias would be to implement artificially intelligent algorithms to assist judges in making sentencing decisions. One of the chief ways AI is currently being used in this field is in systems called risk/needs assessment tools: algorithms that use data about a defendant to analyze their risk of recidivism. The higher the risk score, the more likely the individual is predicted to reoffend.

Risk/needs assessment tools have been used for almost a century to try to reduce the number of incarcerated individuals who have a low risk of recidivism, and to enable the justice system to effectively help and sentence individuals as productively as possible.

However, it wasn’t until 1998 that AI was enlisted. The use of AI for this purpose is a significant progression from previous risk/needs assessment tools, as many of those tools consisted of interviews and questionnaires. These were less reliable, as the data they produced could not be analyzed as effectively and impartially as the AI-enabled, fourth-generation tools.


Overview of risk/needs assessment tools


Currently, the most popular risk/needs assessment tools are COMPAS and PSA. Both are widely used in the justice system, but they operate on quite different principles.

Correctional Offender Management Profiling for Alternative Sanctions (COMPAS)

The most notable of these risk/needs assessment tools is the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS). COMPAS can predict an offender’s rate of recidivism, risk of violent recidivism, and failure to appear in court using various data points regarding the individual. COMPAS divides these into static and dynamic factors. Static factors include prior arrests, whereas dynamic factors include substance abuse, employment history, and pessimism. The tool then analyzes this data and compiles a score for recidivism, violent recidivism, and so on.
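
Since Equivant keeps the actual COMPAS model confidential, the sketch below is purely illustrative: it shows how a tool of this kind might combine weighted static and dynamic factors into a single recidivism score. The factor names, weights, and logistic form are assumptions for the example, not COMPAS’s real algorithm.

```python
# Hypothetical sketch only: COMPAS's real model is proprietary (Equivant),
# so the factor names and weights below are illustrative assumptions.
import math

# Illustrative static factors (criminal history) and dynamic factors
# (current circumstances), each expected in a 0..1 range.
STATIC_WEIGHTS = {"prior_arrests": 1.2, "age_at_first_arrest": -0.8}
DYNAMIC_WEIGHTS = {"substance_abuse": 0.9, "unstable_employment": 0.6, "pessimism": 0.4}

def recidivism_score(static: dict, dynamic: dict, bias: float = -1.5) -> float:
    """Combine weighted factors into a 0..1 risk score via a logistic link."""
    z = bias
    z += sum(STATIC_WEIGHTS[k] * static.get(k, 0.0) for k in STATIC_WEIGHTS)
    z += sum(DYNAMIC_WEIGHTS[k] * dynamic.get(k, 0.0) for k in DYNAMIC_WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))  # squash to a probability-like score

# Example defendant: some prior arrests, substance abuse history, unstable work
print(recidivism_score({"prior_arrests": 0.4, "age_at_first_arrest": 0.2},
                       {"substance_abuse": 1.0, "unstable_employment": 0.5}))
```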

COMPAS has numerous benefits over human judges’ predictions. It can predict risks without any of the subjective factors that are prevalent in human-monitored and controlled risk/needs assessments. Previous risk assessment systems used questionnaires that humans analyzed in order to predict and examine a criminal’s risk and needs. However, these were not as effective because the data from the questionnaires were analyzed by people who inevitably imparted their own biases into the predictions, possibly giving more weight to certain data points as opposed to others.

Unlike these tools, COMPAS is based entirely on past data and not on any subjective or opinion-based factors. With the potential to eliminate judges’ biases, it could represent an immense advance in criminal sentencing. However, because the tool is based on earlier cases that were decided by human judges with their own biases, there is a concern that COMPAS finds patterns of bias against certain groups in its data, leading it to become biased itself.

Public Safety Assessment (PSA)

A less biased risk assessment tool is the Public Safety Assessment (PSA). It uses slightly different risk factors to predict the rates of recidivism, violent recidivism, and failure to appear in court.

Unlike COMPAS, PSA makes its decisions with no regard to considerations such as socioeconomic status and self-efficacy. Instead, it bases its predictions on nine risk factors: age at current arrest, current violent offense, pending charge at the time of the offense, a prior misdemeanor conviction, prior felony conviction, prior violent crime conviction, prior failure to appear at a pretrial hearing in the past two years, prior failure to appear at a hearing more than two years ago, and prior sentence to incarceration. It then weights each factor and sums the points the individual scores into a total risk score, which predicts the likelihood of the defendant reoffending or failing to appear for trial.
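
To make the additive logic concrete, here is a minimal sketch of a PSA-style point-based score. The nine factors mirror those listed above, but the point values are illustrative placeholders rather than the published PSA weights.

```python
# Illustrative PSA-style additive scoring. Point values are placeholders,
# not the published PSA weights.
PSA_FACTORS = {
    "age_under_23_at_arrest": 2,
    "current_violent_offense": 2,
    "pending_charge_at_time_of_offense": 3,
    "prior_misdemeanor_conviction": 1,
    "prior_felony_conviction": 1,
    "prior_violent_conviction": 2,
    "fta_within_past_two_years": 2,
    "fta_older_than_two_years": 1,
    "prior_sentence_to_incarceration": 2,
}

def psa_style_score(defendant: dict) -> int:
    """Sum the points for every factor that applies to the defendant."""
    return sum(points for factor, points in PSA_FACTORS.items() if defendant.get(factor))

# Example: one prior misdemeanor plus a failure to appear within two years
example = {"prior_misdemeanor_conviction": True, "fta_within_past_two_years": True}
print(psa_style_score(example))  # -> 3 with these placeholder points
```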

There are a few key differences that distinguish COMPAS from PSA and emphasize the importance of factors such as transparency and choice of data points in eliminating bias. The first is that, unlike COMPAS, PSA only analyzes risk factors that directly pertain to criminal history (with the exception of age).

The second has to do with differing levels of transparency. While COMPAS keeps its method of determining an offender’s risk confidential, PSA has published the factors and methods it uses. PSA’s decision to release its algorithm is significant because it allows judges who use PSA to better analyze its strengths and weaknesses as a pretrial risk assessment tool. In contrast, the COMPAS algorithm is kept confidential by Equivant, and this has considerable drawbacks when it comes to its use.


Automation in prisons


New and advanced technologies are being used in the post-conviction stage as well. In prisons, artificial intelligence is used for the automation of security and rehabilitation of prisoners. For example, one Chinese prison that houses high-profile criminals is said to be installing an AI network that will recognize and track prisoners 24/7 and notify guards in case something seems to be out of place.

AI tools are also used to determine the criminogenic needs of offenders who can potentially be helped to change through special treatment. In Finnish prisons, for instance, there is a training scheme for inmates that uses AI training algorithms. The inmates are asked to answer uncomplicated questions or to examine pieces of content gathered from social media and the wider internet. These activities provide training data for Vainu, the company that arranges the prison work, while giving prisoners new job-related skills that can help them reintegrate into society once they have served their sentences.

As some academics point out, AI could also help address the solitary confinement crisis in the USA. Smart assistants, similar to Amazon’s Alexa, could be employed as confinement companions that reduce psychological stress for some inmates. If so, the harms of solitary confinement could become less acute, and some even argue that such AI companions might lend new legitimacy to solitary confinement as a penal policy.


AI neuroprediction of recidivism

When it comes to predicting recidivism, AI algorithms tend to demonstrate impressive results.

A study conducted by researchers at Stanford University and the University of California, Berkeley found that risk assessment tools handle the complexity of criminal justice settings considerably better than humans and produce more accurate predictions. When only a few variables are involved, humans can predict reasonably well which defendants will later be arrested for another crime. But when a larger number of factors are at play, the algorithms usually surpass humans: in some tests they were almost 90% accurate in predicting which defendants would be arrested again, whereas humans were only about 60% accurate.
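
The comparison itself is straightforward to reproduce in spirit. The sketch below uses synthetic data (the actual studies relied on historical court records) to contrast a simple few-variable heuristic with a model trained on many factors, and measures each one’s accuracy.

```python
# Minimal sketch of an accuracy comparison on synthetic data; the variables
# and their effects are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 10))                       # 10 hypothetical risk factors
logit = X @ rng.normal(size=10)                    # outcome depends on many factors
y = (logit + rng.normal(scale=1.0, size=n) > 0).astype(int)  # re-arrest label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# "Human-like" heuristic: look at only the first two factors
heuristic = (X_te[:, 0] + X_te[:, 1] > 0).astype(int)

# Statistical tool: logistic regression over all ten factors
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
algo = model.predict(X_te)

print("heuristic accuracy:", (heuristic == y_te).mean())
print("algorithm accuracy:", (algo == y_te).mean())
```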

So despite the controversy around algorithm-based tools, research studies have shown that in contexts resembling real criminal justice settings, risk assessment tools provide more accurate and precise results compared to human judgement.

Transparency considerations

In order for AI to be used effectively in the criminal justice system, the rights of individuals to know how the AI and algorithms work must be reconciled with the rights of corporations to protect their data and material. Furthermore, it’s necessary to assess whether or not AI should be held to a higher standard of transparency than human decision-making.

Some advanced AI systems, known as neural networks, work as “black boxes”. This means that even the creators of the AI don’t fully understand how it makes its decisions. Imagine machine learning as a box with knobs on it: turn the knobs and the output changes, but no one fully understands what goes on inside the box. It is a complex system that learns by being fed data, yet we cannot trace exactly how it analyzes that data to reach its decisions.
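
The “box with knobs” point is easy to see in practice. In the toy example below, a small neural network learns a hidden rule, yet its trained weights are just matrices of numbers that say nothing readable about why a given input is classified one way or another. The data and architecture here are arbitrary choices for illustration.

```python
# Toy illustration of the "box with knobs": the trained network's weights
# carry no human-readable explanation of its decisions.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))              # four made-up input features
y = (X[:, 0] * X[:, 2] > 0).astype(int)    # a nonlinear rule the net must learn

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=1).fit(X, y)

# These matrices are the "knobs": nothing in them says *why* a given
# input ends up classified one way or the other.
for layer, w in enumerate(net.coefs_):
    print(f"layer {layer} weights shape {w.shape}:\n{np.round(w, 2)}")
```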

When an algorithm is shrouded in secrecy and its reasoning cannot be explained, judges who use the tool are unable to accurately assess its benefits and drawbacks. Instead, they are forced to take only the score they are given and use it in sentencing decisions without any of the context from which it was derived. If judges rely too heavily on algorithms without knowing how they work, they may end up making more biased decisions than they would the traditional way, without the algorithm.

Bias implications

As we have seen, the main ethical consideration when using AI in criminal justice as a risk/needs assessment tool is bias. Bias can be introduced into AI in many different ways. If the AI is a neural network, it must be fed training data. For example, if an AI is made to distinguish between dogs and cats, it must be fed images of both so that it “knows” which ones are which.

Thus, a neural network that acts as a risk/needs assessment tool needs to be fed data about past offenders and whether they reoffended. However, the data the AI receives often reflects biases already present in human decisions, and the model can then reproduce and even amplify them.
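
A deliberately simplified simulation shows how this happens. In the sketch below, two groups reoffend at exactly the same rate, but one group’s reoffending is recorded (that is, leads to re-arrest) more often; a model trained on those recorded labels ends up assigning that group systematically higher scores. All numbers are made up for illustration.

```python
# Simplified simulation of label bias: identical underlying behaviour, but
# group B's reoffending is *recorded* more often than group A's.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 20000
group_b = rng.integers(0, 2, size=n)              # 0 = group A, 1 = group B
reoffends = rng.random(n) < 0.30                  # same true rate for both groups
recorded = reoffends & (rng.random(n) < np.where(group_b == 1, 0.9, 0.5))

# The model is trained on *recorded* re-arrests, with group membership
# available through a correlated feature (e.g. neighbourhood of arrest).
X = np.column_stack([group_b, rng.normal(size=n)])
model = LogisticRegression().fit(X, recorded)

print("mean score, group A:", model.predict_proba(X[group_b == 0])[:, 1].mean())
print("mean score, group B:", model.predict_proba(X[group_b == 1])[:, 1].mean())
# Group B receives systematically higher risk scores despite identical behaviour.
```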

Ultimately, humans and machines can both be biased. Two different people can have widely different interpretations of the law and perspectives on punishment for a certain crime. Algorithms are biased because of the data they are given and how they interpret it. These two kinds of bias are fundamentally distinct: algorithmic bias is a computational artifact introduced by the very human bias these tools are meant to eradicate.

It can be argued that human bias consists of different people interpreting the law in different ways to achieve justice as they see fit. Nevertheless, when we look at the data, what poses as a thriving justice system can turn out to be humans failing to be consistent in their own decisions or failing to separate their personal views from the impartiality and reason required of an effective judge. It is also worth remembering that human decision-making is unlikely to improve anytime soon, whereas AI is constantly developing: we are learning to take the bias out of our algorithms in a way that is impossible to do within our own minds.

Personal data protection


Some people fear that artificial intelligence uses personal information in ways that can violate privacy. This is why safeguards such as the data subject’s explicit consent to the processing of their personal data, the data minimization principle, the principle of purpose limitation, and the set of rights that apply when automated decision-making is permitted are so important.

The General Data Protection Regulation (GDPR) provides some points of reference here. When data is being automatically processed, the data controller must apply suitable measures to protect the data subject’s rights and freedoms as well as legitimate interests.

The GDPR also gives the data subject the right to receive “meaningful information about the logic involved” in automated processing.
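
For simple point-based or linear tools, providing such information is technically straightforward. The sketch below, with purely hypothetical factor names and weights, shows one way a tool could list each factor’s contribution to a defendant’s score so the logic can be inspected.

```python
# Hypothetical sketch of a per-factor explanation for a linear/point-based
# scoring model. Factor names and weights are illustrative assumptions.
WEIGHTS = {
    "prior_felony_conviction": 1.0,
    "pending_charge_at_time_of_offense": 1.5,
    "prior_failure_to_appear": 2.0,
}

def explain_score(defendant: dict) -> None:
    """Print each factor's contribution so the data subject can inspect the logic."""
    total = 0.0
    for factor, weight in WEIGHTS.items():
        contribution = weight * float(defendant.get(factor, 0))
        total += contribution
        print(f"{factor:40s} weight={weight:4.1f}  contribution={contribution:4.1f}")
    print(f"{'total score':40s} {total:4.1f}")

explain_score({"prior_felony_conviction": 1, "prior_failure_to_appear": 1})
```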

Automated decisions that produce adverse legal effects for the data subject, or that significantly affect them, are prohibited under Article 11 of the Law Enforcement Directive unless they are authorized by Union or Member State law that also provides appropriate safeguards for the data subject’s rights and freedoms. Under the provisions of the Law Enforcement Directive, judicial decisions made solely by an algorithmic tool can never be lawful.

Presumption of innocence

From time to time, the use of algorithmic tools in criminal procedures can lead to violations of the right to a fair trial, particularly in regard to the right to a randomly selected judge, the right to an independent and impartial tribunal, and the presumption of innocence.

The presumption of innocence is regarded as a fundamental principle of the common law. It means that every person should be presumed innocent until proven guilty following a fair trial.

Of course, some people express concerns that AI tools may lead to biased and inaccurate predictions. However, by taking into account the presumption of innocence and the aim of reducing the number of people held before trial, AI decision-making systems are being improved to assist judges in identifying reasons to release people rather than solely focusing on individuals’ risks.

AI and criminal justice: the perspectives

In order for AI to be used as effectively as possible in the justice system, we must work to resolve these social and ethical considerations. We should also ask ourselves how important it is for the algorithm to be fully transparent because in some cases complete transparency is impossible in both AI and human decision-making.

Bias is an issue in both humans and AI, but much of the algorithmic bias is introduced as a result of human biases in AI’s training data. That’s why ensuring that training data is unbiased should be the top priority when deploying artificially intelligent risk/needs assessment tools.
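
In practice, this starts with auditing the data and the scores it produces. The sketch below, using synthetic data in place of real case records, checks two common warning signs: different score distributions and different false-positive rates across groups.

```python
# Minimal audit sketch, assuming we have model scores, true outcomes and a
# protected attribute for a historical dataset (synthetic here).
import numpy as np

def audit(scores: np.ndarray, outcomes: np.ndarray, group: np.ndarray,
          threshold: float = 0.5) -> None:
    flagged = scores >= threshold                     # "high risk" decision
    for g in np.unique(group):
        mask = group == g
        fp = flagged[mask] & ~outcomes[mask]          # flagged but did not reoffend
        fpr = fp.sum() / max((~outcomes[mask]).sum(), 1)
        print(f"group {g}: mean score {scores[mask].mean():.2f}, "
              f"false-positive rate {fpr:.2f}")

# Synthetic example data; in practice this would come from historical case records.
rng = np.random.default_rng(3)
group = rng.integers(0, 2, size=1000)
scores = np.clip(rng.normal(0.5 + 0.1 * group, 0.2), 0, 1)
outcomes = rng.random(1000) < 0.3
audit(scores, outcomes, group)
```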

With this in mind, it’s highly advisable to turn to a team of experienced professionals who will consider all angles of the justice system and do their best to implement AI effectively and correctly. We at PixelPlex specialize in delivering top-notch AI solutions that help our customers achieve the desired results. Drop us a line and let’s start working on your trailblazing project today!

Author: Kira Belova, Technical Writer
