AI Tools Intensify Inequities in US Policing, Critics Warn

Artificial intelligence (AI) is increasingly integrated into policing practices across the United States, raising concerns about both its effectiveness and its potential to exacerbate existing injustices. Critics argue that AI technologies often reinforce systemic biases rather than promote public safety, particularly for marginalized communities.

In recent years, police departments have utilized AI-driven facial recognition tools, resulting in numerous wrongful arrests. Many individuals apprehended were found to be far from the location of alleged crimes or physically incapable of committing them. Alarmingly, these errors disproportionately affect people of color, highlighting the risks of relying on automated systems that learn from historical data steeped in bias.

Graham Lovelace, a journalist who writes on AI issues, emphasized the danger of over-reliance on these technologies. “This technology can be highly unreliable, and it can cause harm,” he noted. Lovelace explained that officers often accept AI-generated identifications without sufficient verification, conditioned by a culture that treats AI as inherently accurate.

The implications of AI misuse in policing are significant. Because law enforcement agencies hold a monopoly on the use of force in their communities, the stakes of technological errors can be life-altering. Once flagged by an AI system, individuals may carry a persistent label that subjects them to further scrutiny, even if they are later exonerated. Lovelace pointed out that being targeted leads to the collection of extensive personal information, creating a surveillance cycle that is difficult to escape.

As police departments increasingly deploy AI tools, the shortcuts they provide for high-stakes decision-making can amplify unethical practices. For instance, predictive policing software such as Geolitica can brand neighborhoods as crime hotspots based on police activity rather than actual criminal occurrences. This justifies increased patrols in already over-policed areas, thereby perpetuating a cycle of containment rather than addressing underlying issues.
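The feedback loop critics describe is easy to reproduce. The sketch below is a minimal, purely illustrative simulation (all parameters are hypothetical; this is not Geolitica’s actual model): every neighborhood has the same underlying crime rate, but crimes are far more likely to be recorded where patrols are sent, and patrols go wherever the records are thickest.

```python
import random

random.seed(42)

NEIGHBORHOODS = 10
WEEKS = 52
CRIME_RATE = 0.3        # identical true weekly crime probability in every area
DETECT_PATROLLED = 0.9  # chance a crime is recorded where police patrol
DETECT_ELSEWHERE = 0.1  # chance a crime is recorded anywhere else

recorded = [0] * NEIGHBORHOODS
recorded[3] = 5  # one area starts with a few extra historical arrests

for _ in range(WEEKS):
    # "Prediction": patrol the three areas with the most recorded incidents
    hotspots = sorted(range(NEIGHBORHOODS), key=recorded.__getitem__, reverse=True)[:3]
    for area in range(NEIGHBORHOODS):
        if random.random() < CRIME_RATE:  # crime occurs uniformly...
            detected = DETECT_PATROLLED if area in hotspots else DETECT_ELSEWHERE
            if random.random() < detected:  # ...but is unevenly recorded
                recorded[area] += 1

print(recorded)  # the initially over-policed area remains the top "hotspot"
```

After a simulated year, the neighborhood that began with a handful of extra arrests still tops the list, not because more crime occurred there, but because it was watched more closely.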

The effectiveness of many AI systems is also under scrutiny. A study by The Markup and Wired found that of the 23,631 predictions Geolitica made in 2018 for the police department in Plainfield, New Jersey, fewer than 100, a hit rate below half of one percent, matched an actual crime in the predicted category. Similarly, a 2023 audit by the New York City Comptroller’s office found that only 8 to 20 percent of alerts generated by ShotSpotter, a gunshot detection system, corresponded to confirmed shootings.

Despite these findings, the New York Police Department (NYPD) has continued to invest in such technology, spending approximately $54 million on ShotSpotter between 2015 and 2025. In early 2023, the department extended its contract with the company for another $22 million. Former Mayor Eric Adams defended the technology, asserting its importance for public safety.

Council Member Tiffany Cabán, a vocal advocate for policing reform, argued that the concept of public safety is often manipulated for political gain. She warned that tools like ShotSpotter disproportionately target low-income communities of color, creating dangerous police encounters that can end in tragedy.

Critics argue that the embrace of AI in policing is driven by a desire for modernity and efficiency, regardless of its practical effectiveness. Companies like Flock Safety promote their AI tools as enhancing police capabilities, but this approach can obscure the reality of state violence and systemic oppression. By framing traditional policing tactics as “data-driven,” law enforcement can sidestep accountability for targeting specific demographics.

As demand for AI technologies grows, startups are reaping profits. Flock Safety, valued at $7.5 billion, exemplifies this trend, providing surveillance tools that critics argue primarily serve the interests of the wealthy.

Transparency remains a significant issue, as police departments often shield their contracts with AI companies from public scrutiny. Efforts by the New York City Council to mandate the release of Impact and Use Policies for surveillance tools have been met with resistance. Cabán noted that the NYPD has been slow to provide meaningful information, perpetuating a lack of accountability.

The ongoing adoption of AI in policing raises fundamental questions about civil rights, privacy, and the potential for deepening societal injustices. Law professor Andrew Guthrie Ferguson highlighted the risks of outsourcing critical decisions about guilt and innocence to private companies driven by profit rather than public welfare.

While some police departments claim to have implemented safeguards to mitigate bias and protect personal data, the effectiveness of these measures is often unverified. For instance, policies in St. Paul, Minnesota, mandate that subject matter experts verify AI-generated work for accuracy and bias. However, this relies on human operators effectively auditing complex systems whose workings remain largely opaque.

The reliance on AI technologies to tackle complex social issues points to a troubling trend. Critics argue that the belief in technology as a panacea for systemic problems is misguided and diverts resources from more effective investments, such as healthcare and education. As AI surveillance becomes more pervasive, its role in perpetuating injustice suggests a system that is less a modern advance than an old injustice in new packaging.