
Can AI Tools Improve Policing?

Police officers often operate under immense pressure, needing to make quick decisions with limited information. Whether investigating a crime or patrolling a neighborhood, they frequently rely on intuition and experience to guide their choices.

This approach, sometimes termed “gut policing,” is not mere guesswork; rather, it involves rapid pattern recognition. These instincts are shaped by extensive training and years of real-world experience, allowing officers to discern what is significant in unpredictable situations.

However, intuition is not the only tool at police officers’ disposal. Many law enforcement agencies are now embracing AI-enabled technologies, such as predictive policing algorithms that identify potential crime hotspots and offender assessment systems that enhance decision-making processes.




Read more: A ‘black box’ AI system has been influencing criminal justice decisions for over two decades – it’s time to open it up


This trend is part of a broader global shift in policing, where AI tools are integrated into daily operations. These advanced technologies analyze vast amounts of data and patterns that would be beyond the capacity of any single officer to evaluate swiftly, aiming to base decisions on concrete evidence rather than solely on instinct or experience.

Public sentiment suggests many people support AI technology in policing, provided clear guidelines govern its use.



While AI has often been viewed as a potential threat to employment, what is the actual impact? In this series, we examine how AI is affecting various professions and how individuals in these roles perceive their new technological companions.


In England, law enforcement agencies already use AI in their daily operations. Tools like Untrite Thrive help police control rooms optimize resource allocation, while Avon and Somerset Police uses Qlik Sense to assess individuals’ risk of reoffending. These innovations align with a government agenda focused on enhancing efficiency and reducing costs.

However, as reliance shifts from human judgment to automated predictions, the value of traditional police intuition may diminish. Instances have emerged where AI tools mistakenly identified individuals, locations, or risks.

Unverified Information

A recent report from a House of Commons select committee underscored significant failures in West Midlands Police’s use of Microsoft Copilot during the force’s attempt to prevent Israeli fans of Maccabi Tel Aviv from travelling to Birmingham for a Europa League match against Aston Villa last November.

The police’s assertions regarding potential disorder involving Maccabi fans were based on erroneous data generated by the AI tool, including a fictitious game between Maccabi and West Ham United.

“Information that deemed Maccabi fans as a high risk was accepted without adequate scrutiny,” noted committee chair Karen Bradley. “Alarmingly, this included unverified information produced by AI.”

Such inaccuracies were relayed by senior officers during safety meetings, demonstrating a concerning lack of diligence and an overreliance on unverified AI outputs. The matter is now under investigation by the Independent Office for Police Conduct.

Video: Channel 4 News.

This incident is far from isolated. The Harm Assessment Risk Tool used by Durham Constabulary has shown significant flaws, including overestimating reoffending risks and inherent biases in its data. Similarly, the Metropolitan Police’s now-defunct Gang Matrix faced criticism from the Information Commissioner’s Office for unfairly designating young black men as high-risk due to flawed scoring systems.

Utilizing AI-driven tools is a double-edged sword. While they can enhance decision-making, they may also perpetuate bias and magnify errors. Based on experiences working with police across England, we find that AI-supported decision-making is most effective when officers integrate their firsthand experiences with data-driven insights.

Reinforcing Biases

Our ongoing study on AI in policing reveals that an uncritical dependence on AI may bolster existing biases, particularly affecting vulnerable and marginalized communities.
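One way this reinforcement can happen is through a feedback loop: if patrols are allocated according to historically recorded incidents, extra patrols in one area record more incidents there, which in turn attracts more patrols. The toy simulation below illustrates the mechanism with entirely hypothetical numbers; it is not modelled on any deployed policing system.

```python
# Toy feedback-loop sketch (hypothetical figures, not any real system).
# Two districts have the SAME true incident rate, but district A starts
# with more recorded incidents because of historical over-policing.
true_rate = 0.3                    # incidents discovered per patrol, identical in both
recorded = {"A": 60, "B": 40}      # biased historical record

for year in range(20):
    total = sum(recorded.values())
    # Allocate 100 patrols in proportion to each district's recorded incidents.
    patrols = {d: round(100 * n / total) for d, n in recorded.items()}
    for d, p in patrols.items():
        # More patrols discover more incidents, inflating that district's record.
        recorded[d] += int(p * true_rate)

share_a = recorded["A"] / sum(recorded.values())
print(f"District A's share of recorded incidents after 20 years: {share_a:.2f}")
# Despite identical true rates, the initial 60/40 disparity never corrects itself.
```

Because patrols follow the biased record and the record grows in proportion to patrols, the disparity is locked in: the data appear to confirm that district A is riskier, when the only real difference was where officers looked.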

Our preliminary research, yet to be published, indicates that effective AI utilization involves a challenging balance: officers must simultaneously trust and question AI recommendations, maintaining a watchful mindset.

To mitigate bias in AI-driven decisions, police departments should provide bias-awareness training, equipping officers to frequently and constructively challenge AI outputs.

The National Police Chiefs’ Council covenant mandates that AI should supplement, not supplant, human judgment. This is a positive direction, yet the principle can falter if officers regard AI recommendations as absolute truths instead of guidance needing thorough scrutiny.

These issues gain urgency with the government’s introduction of a national predictive policing prototype announced in August 2025. Scheduled for national implementation by 2030, this system merges AI-driven crime mapping with behavioral pattern analysis, bolstered by an initial £4 million investment.

It draws on a multitude of data from police forces, local councils, and social services, and builds upon the expanding use of live facial recognition vans operating across seven police forces in England and Wales.




Read more: Facial recognition technology used by police is now very accurate – but public understanding lags behind


Simultaneously, developments within police organizations illustrate the limitations of relying solely on technology. The Metropolitan Police has begun using AI to identify potential officer misconduct by examining internal data, such as records of illness, absence, and overtime.

While the Met claims such systems enhance standards and rebuild public trust, critics caution that these tools risk misinterpreting workplace pressures as misconduct, potentially undermining accountability rather than reinforcing it.

Ultimately, the effectiveness of AI in policing hinges on the governance surrounding its deployment. Keeping a vigilant human in every AI decision-making loop should be a fundamental safeguard.
