Police trial University of Cambridge artificial intelligence tool
Police in the front line of tricky risk-based judgements are trialling an AI system trained by Cambridge University criminologists to offer guidance based on the outcomes of five years of criminal histories.
Dr Geoffrey Barnes and Professor Lawrence Sherman, from the Jerry Lee Centre for Experimental Criminology in the University of Cambridge’s Institute of Criminology, have been working with police forces around the world to ask whether AI can help ensure the right decisions are made in the heat of the moment.
Dr Barnes says: “It’s 3am on Saturday morning. The man in front of you has been caught in possession of drugs. He has no weapons, and no record of any violent or serious crimes. Do you let the man out on police bail the next morning or keep him locked up for two days to ensure he comes to court on Monday?”
The kind of scenario Dr Barnes describes – whether to detain a suspect in police custody or release them on bail – occurs hundreds of thousands of times a year across the UK. The outcome of this decision could have major consequences for the suspect, for public safety and for the police.
“The police officers who make these custody decisions are highly experienced,” explains Dr Barnes. “But all their knowledge and policing skills can’t tell them the one thing they need to know most about the suspect – how likely is it that he or she is going to cause major harm if they are released? This is a job that really scares people – they are at the front line of risk-based decision-making.”
Professor Sherman adds: “Imagine a situation where the officer has the benefit of a hundred thousand, and more, real previous experiences of custody decisions. No one person can have that number of experiences, but a machine can.”
In mid-2016, with funding from the Monument Trust, the researchers installed the world’s first AI tool for helping police make custodial decisions in Durham Constabulary.
Called the Harm Assessment Risk Tool (HART), the AI-based technology uses 104,000 histories of people previously arrested and processed in Durham custody suites over the course of five years, with a two-year follow-up for each custody decision.
Using a method called ‘random forests’, the model looks at vast numbers of combinations of ‘predictor values’, the majority of which focus on the suspect’s offending history, as well as age, gender and geographical area.
“These variables are combined in thousands of different ways before a final forecasted conclusion is reached,” explains Dr Barnes.
“Imagine a human holding this number of variables in their head and making all of these connections before making a decision. Our minds simply can’t do it.”
The aim of HART is to categorise whether in the next two years an offender is high risk (highly likely to commit a new serious offence such as murder, aggravated violence, sexual crimes or robbery); moderate risk (likely to commit a non-serious offence); or low risk (unlikely to commit any offence).
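The random-forest approach described above can be sketched in a few lines with scikit-learn. To be clear, this is an illustration only: HART's actual predictors, training data and model configuration are not reproduced here, so the feature names, synthetic records and outcome labels below are invented stand-ins for the kind of inputs the article describes (offending history, age, gender, geographical area) and the three forecast categories.

```python
# Illustrative sketch only – synthetic data, not HART's real model or inputs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical predictor values per custody record: age, number of prior
# offences, years since first arrest, and a coded geographical area.
n = 1000
X = np.column_stack([
    rng.integers(18, 70, n),   # age at arrest
    rng.integers(0, 30, n),    # count of prior offences
    rng.integers(0, 40, n),    # years since first arrest
    rng.integers(0, 10, n),    # coded geographical area
])

# Outcome observed over a two-year follow-up window:
# 2 = high risk (new serious offence), 1 = moderate (non-serious), 0 = low.
y = rng.integers(0, 3, n)

# A random forest trains many decision trees, each on a random subsample of
# records and predictors; the trees' combined vote is the forecast.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X, y)

labels = {0: "low risk", 1: "moderate risk", 2: "high risk"}
suspect = [[35, 4, 10, 3]]  # one hypothetical custody record
print(labels[int(model.predict(suspect)[0])])
```

Because each tree sees a different random slice of the data, the forest effectively explores thousands of combinations of predictor values – the point Dr Barnes makes about a model holding far more variable combinations than a human decision-maker could.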
“The need for good prediction is not just about identifying the dangerous people,” explains Professor Sherman. “It’s also about identifying people who definitely are not dangerous. For every case of a suspect on bail who kills someone, there are tens of thousands of non-violent suspects who are locked up longer than necessary.”
Durham Constabulary want to identify the ‘moderate-risk’ group – who account for just under half of all suspects according to the statistics generated by HART.
These individuals might benefit from the force’s Checkpoint programme, which aims to tackle the root causes of offending and offers an alternative to prosecution, in the hope of turning moderate risks into low risks.
“It’s needles and haystacks,” says Professor Sherman. “On the one hand, the dangerous ‘needles’ are too rare for anyone to meet often enough to spot them on sight. On the other, the ‘hay’ poses no threat and keeping them in custody wastes resources and may even do more harm than good.”
A randomised controlled trial is currently under way in Durham to test the use of Checkpoint among those forecast as moderate risk.
HART is also being refreshed with more recent data – a step that Dr Barnes explains will be an important part of this sort of tool: “A human decision-maker might adapt immediately to a changing context – such as a prioritisation of certain offences, like hate crime – but the same cannot necessarily be said of an algorithmic tool.
“This suggests the need for careful and constant scrutiny of the predictors used and for frequently refreshing the algorithm with more recent historical data.”
The researchers stress that HART’s output is for guidance only and that the ultimate decision is that of the police officer in charge.
“HART uses Durham’s data and so it’s only relevant for offences committed in the jurisdiction of Durham Constabulary. This limitation is one of the reasons why such models should be regarded as supporting human decision-makers not replacing them,” explains Dr Barnes.
“These technologies are not, of themselves, silver bullets for law enforcement and neither are they sinister machinations of a so-called surveillance state.”
Some decisions, says Sherman, have too great an impact on society and the welfare of individuals for them to be influenced by an emerging technology.
Where AI-based tools show great promise, however, is in using forecasts of offenders’ risk levels for effective ‘triage’, as Professor Sherman describes: “The police service is under pressure to do more with less, to target resources more efficiently, and to keep the public safe.
“The tool helps identify the few ‘needles in the haystack’ who pose a major danger to the community, and whose release should be subject to additional layers of review. At the same time, better triaging can lead to the right offenders receiving release decisions that benefit both them and society.”
• PHOTOGRAPH SHOWS: Professor Lawrence Sherman