Why AI that sentences criminals can be dangerous

Artificial intelligence is already helping to determine your future in some ways: when you search for something in a search engine, when you use a service like Netflix, or when a bank assesses your suitability for a mortgage. But what happens if artificial intelligence has to determine whether you are guilty in court? Strangely enough, in some countries this may already be happening. Recently, U.S. Supreme Court Chief Justice John Roberts was asked whether he could foresee a day when "smart machines, driven by artificial intelligence, will assist in courtroom fact-finding or even in judicial decision-making." He replied that the day is already here, and that it is significantly affecting how the judiciary goes about its work.

Perhaps Roberts was referring to the recent case of Eric Loomis, who was sentenced to six years in prison partly on the recommendation of a private company's secret proprietary software. Loomis, who already had a criminal record and was sentenced for fleeing the police in a stolen car, now claims that his right to due process was violated, because neither he nor his representatives could examine or challenge the algorithm behind the recommendation.

The report was prepared by the Compas program, which Northpointe sells to courts. The program embodies a new trend in AI research: helping judges make "better" (or at least more data-driven) decisions in court.

Although the specific details of the Loomis case remain sealed, it certainly contains charts and numbers quantifying Loomis's life, behavior, and likelihood of reoffending. These may include his age, race, gender identity, habits, browser history, or perhaps even skull measurements. No one knows exactly.
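To make the idea concrete, here is a minimal sketch of how a risk-scoring tool of this kind might work in principle. It is purely illustrative: the feature names and weights below are invented, since Compas's actual model and inputs are proprietary and undisclosed.

```python
import math

# Hypothetical weights for a logistic-regression-style risk score.
# These numbers are invented for illustration; the real Compas model is secret.
WEIGHTS = {
    "age_under_25": 0.8,
    "prior_convictions": 0.5,      # contribution per prior conviction
    "fled_police_before": 1.2,
    "employed": -0.6,
}
BIAS = -2.0

def recidivism_risk(defendant: dict) -> float:
    """Return a 0..1 'risk of recidivism' score from a bag of defendant features."""
    z = BIAS + sum(WEIGHTS[k] * defendant.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))   # logistic squashing to a probability-like score

# Example: a hypothetical defendant with three prior convictions who once fled police.
print(recidivism_risk({"age_under_25": 0, "prior_convictions": 3,
                       "fled_police_before": 1, "employed": 0}))
```

The point of the sketch is not the arithmetic but the opacity: a judge sees only the final score, while the chosen features and weights, the part that actually encodes the system's values, stay hidden inside the vendor's software.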

It is known that the prosecutor in the case told the judge that Loomis demonstrated "a high risk of recidivism, violence, and pre-trial misconduct." This is standard language when it comes to sentencing. The judge agreed and told Loomis that "according to Compas, he was identified as a person of high risk to society."

The Wisconsin Supreme Court ruled against Loomis, adding that the Compas report brought valuable information to the decision, but noting that the judge would have handed down the same sentence without it. Of course, there is no way to verify that. What cognitive biases arise when an all-powerful "smart" system like Compas takes part in a case and advises judges how to act?

Unknown use

Let's be frank: there is nothing "illegal" about what the Wisconsin court did; it is simply an example. Other courts can and will do the same.

Unfortunately, we do not know the extent to which AI and other algorithms are used in sentencing. There is a view that some courts are "testing" systems like Compas in closed trials but cannot disclose the partnerships. There is also a view that several AI startups are developing such systems.

However, the use of AI in law does not begin and end with sentencing; it begins with the investigation. In the UK, the VALCRI system has already been developed to perform time-consuming analytical work in a matter of seconds: it sifts through tons of data such as texts, lab reports, and police documents to highlight things that may require further investigation.
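The reporting does not describe VALCRI's internals, but the kind of triage it is said to perform can be imagined with a minimal sketch like the one below: score how strongly each archived record relates to a query case, so an analyst knows what to read first. The scoring method and all case details here are assumptions for illustration, not VALCRI's actual algorithm.

```python
from collections import Counter

def tokenize(text: str) -> Counter:
    """Very crude bag-of-words; a real system would use far richer features."""
    return Counter(w.strip(".,;:").lower() for w in text.split() if len(w) > 3)

def relevance(query: str, record: str) -> float:
    """Score word overlap between a query case description and an archived record."""
    q, r = tokenize(query), tokenize(record)
    shared = sum((q & r).values())
    return shared / (sum(q.values()) or 1)

case = "burglary on Elm Street, blue van seen leaving, crowbar recovered"
records = [
    "lab report: fingerprints on crowbar match prior burglary suspect",
    "traffic stop of blue van near Elm Street two days earlier",
    "noise complaint, unrelated address",
]
# Surface the most relevant records first, as a triage aid for a human analyst.
for rec in sorted(records, key=lambda r: relevance(case, rec), reverse=True):
    print(f"{relevance(case, rec):.2f}  {rec}")
```

Even in this toy form, the design choice matters: the tool only ranks material for a human to examine, rather than drawing conclusions on its own.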

The West Midlands Police in the UK will test VALCRI over the next three years, using anonymized data containing more than 6.5 million records. A similar trial is being run by the Antwerp police in Belgium. However, in the past, AI and deep-learning projects involving massive data sets have proved problematic.

Benefits for the few

Technology has brought many useful aids to courtrooms, from photocopiers to DNA extraction from fingerprints to sophisticated surveillance techniques. But that does not mean any technology is an improvement.

Although using AI in investigations and sentencing could potentially save time and money, it raises acute problems. A ProPublica report on Compas made clear that the program mistakenly judged black defendants to be more prone to recidivism than white defendants. Even the most sophisticated AI systems can inherit the racial and gender biases of those who create them.
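The bias ProPublica described can be expressed as a simple measurable quantity: the false positive rate, meaning defendants labeled high risk who did not in fact reoffend, computed separately for each group. The records below are invented toy data; only the metric is the point.

```python
# Toy illustration of the kind of check ProPublica performed: compare how often
# each group is wrongly labeled "high risk". The data here is invented.
records = [
    # (group, labeled_high_risk, actually_reoffended)
    ("black", True,  False), ("black", True,  True), ("black", False, False),
    ("white", False, False), ("white", True,  True), ("white", False, True),
]

def false_positive_rate(group: str) -> float:
    """Share of non-reoffenders in `group` that the tool still flagged as high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders) if non_reoffenders else 0.0

for g in ("black", "white"):
    print(g, false_positive_rate(g))
```

A gap between the two printed rates is exactly the kind of disparity the ProPublica analysis reported, and it can arise even when race is never an explicit input, because correlated features carry the bias in.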

Moreover, what is the point of shifting decision-making (at least in part) onto an algorithm for questions that are uniquely human? In the United States there is a deliberate friction in having juries judge their peers. Legal standards have never been uniform, which is why jury trials are considered among the most democratic and effective systems of conviction. We make mistakes, but over time we accumulate knowledge of how not to repeat them, refining the system.

Compas and similar systems represent a "black box" in the legal system. There should be no such thing. Legal systems depend on continuity, transparency of information, and the ability to review. Society does not want a system that encourages a race among AI startups to produce fast, cheap, and proprietary solutions. AI made in haste will be terrible.

An updated, open-source version of Compas would be an improvement. But first we will have to raise the standards of the justice system before we start shifting responsibility onto algorithms.