Multiple white security cameras mounted on a blue wall, arranged in a grid pattern, all facing different directions. The image has a blue tint, giving it a cool, technological atmosphere.

Getty / Futurism

Cities Are Shredding Their AI Surveillance Contracts en Masse

A wave of public backlash has led at least 30 cities across the United States to terminate their contracts with Flock Safety, an AI surveillance company whose CEO has publicly vowed to “end all crime” within the decade by blanketing the nation with an ever-present network of security cameras. The development, first reported by *NPR*, underscores growing public unease and mounting pressure from privacy advocates and grassroots organizations against the spread of AI-powered surveillance in local communities.

The surge in cancellations is largely the work of activists, who have successfully mobilized communities against what they see as an encroachment on civil liberties. Will Freeman, a Colorado-based organizer and the creator of DeFlock.org, an online platform that tracks these surveillance devices, told *NPR* that public sentiment is shifting. “We are seeing a lot more momentum,” Freeman explained, predicting that “more cities [will be] dropping Flock” as awareness continues to spread.

These grassroots campaigns have already yielded tangible results, compelling municipalities such as Flagstaff, Arizona; Eugene, Oregon; and Santa Cruz, California, to sever their ties with Flock Safety. In each city, the decision came after intense community debate and local protests, highlighting the power of organized citizen action. Flagstaff Mayor Becky Daggett told *NPR*, “In the end, it was just clear that this wasn’t going to be a technology that was going to be well received or that we could continue to use.” That sentiment echoes across other cities where residents have voiced strong concerns about privacy, data security, and the potential for abuse.

Flock Safety deploys AI-powered automated license plate readers (ALPRs) and other sensors that continuously scan and record vehicle movements. The company’s marketing emphasizes its ability to help law enforcement solve crimes more efficiently by providing leads and evidence. While the promise of enhanced public safety is attractive to city councils and police departments, critics argue that the trade-off is too high a price for a free society: a ubiquitous surveillance network that logs the movements of every vehicle, innocent or not.

DeFlock.org has become a central resource in this debate. It is an open-source web application that maps and tracks license plate readers across the United States. While Flock Safety is currently the most prominent vendor in the space, DeFlock’s interactive map already logs over 77,000 AI license plate readers from various companies, a stark picture of how widely the technology has been adopted. The platform both exposes the scale of this surveillance infrastructure and educates the public on its often-overlooked implications.

Superficially, license plate readers might appear innocuous, merely passive observers of traffic flow, seemingly posing no threat unless one is actively engaged in criminal activity. However, as DeFlock and numerous civil liberties organizations like the Electronic Frontier Foundation (EFF) have meticulously documented, these devices harbor a multitude of hidden dangers for ordinary, law-abiding residents.

One of the most significant concerns is the creation of detailed, long-lived records of an individual’s location history. Every time a vehicle passes an ALPR, its license plate, the time, and the location are logged, building an extensive digital trail of movements. This data, often retained for months or even years, can reveal patterns of life, associations, and personal habits, creating a surveillance dragnet far broader than most citizens realize. Critics argue that such granular location data is ripe for misuse, whether by government agencies, commercial entities, or individual bad actors with unauthorized access.

The dangers are not merely theoretical; flawed data from AI surveillance software has already led to severe consequences. In an illustrative case from October, a Denver woman was wrongly accused of stealing a package worth just $25. Police, relying on Flock Safety data, determined that her vehicle had been in the vicinity on the day of the alleged theft. Only after she produced GPS data from her own devices, proving she had driven through the area without stopping, were the charges dropped. The incident demonstrated how law enforcement, without proper scrutiny, can treat AI-generated data as definitive evidence, leading to wrongful arrests, personal distress, legal fees, and an erosion of trust in both the technology and the justice system.

Furthermore, the deployment of ALPRs has been linked to issues of racial profiling. Studies and investigations by organizations like the EFF have highlighted how these technologies, when combined with other biased datasets or used without proper oversight, can disproportionately target specific racial or ethnic communities. The potential for these tools to reinforce existing biases within policing is a critical concern for advocates of social justice and civil rights.

Perhaps most egregious are documented cases of outright abuse by those entrusted with access to this powerful surveillance infrastructure. In one shocking example, a Georgia police chief was arrested for using Flock cameras for personal stalking and harassment, including searches of data on individuals in Capitola. Such incidents lay bare the reality that even well-intentioned surveillance tools, in the hands of individuals, are susceptible to profound abuses of power, directly violating citizens’ rights to privacy and security.

Opposition to Flock Safety has intensified dramatically as the Department of Homeland Security (DHS) and Immigration and Customs Enforcement (ICE) have begun to “weaponize” AI surveillance tools, particularly under the Trump administration. Feeding ALPR data into immigration enforcement allows these agencies to track and apprehend individuals, including undocumented immigrants and even people legally protesting government actions. That shift has transformed what some saw as a local crime-fighting tool into a component of a far broader crackdown on immigrant communities and dissent, further galvanizing privacy advocates and civil liberties groups, who argue that companies like Flock Safety bear moral responsibility for how their technology is used.

The controversy surrounding Flock Safety and similar AI surveillance companies is a microcosm of a larger societal debate about the balance between public safety and individual privacy in an increasingly digitized world. Critics often invoke the “slippery slope” argument, warning that what begins as relatively benign license plate readers can rapidly evolve into more invasive forms of mass surveillance, including ubiquitous facial recognition, predictive policing algorithms, and comprehensive data aggregation that creates a permanent, searchable record of virtually every citizen’s life.

As more cities follow the lead of Flagstaff, Eugene, and Santa Cruz, the ongoing struggle highlights the critical role of citizen activism and local government oversight in pushing back against unchecked technological expansion. The wave of cancellations signals growing public awareness and a collective demand for greater transparency, accountability, and ethical guardrails in the deployment of artificial intelligence in public spaces. It also underscores the importance of safeguarding fundamental rights at a moment when technological advances constantly test traditional notions of privacy and freedom, so that the promise of “ending crime” does not come at the cost of a surveillance state.