Controlling the Game

New Predictive Policing Tools Employing Surveillance, Raising Privacy Fears

By Jackson Chen

After debuting more than a decade ago as a way to stop crime before it happens, predictive policing methods are shifting toward a more surveillance-based model that could lead to greater privacy concerns, experts told Digital Privacy News.

One of the earliest instances of predictive policing showed up in 2011, when the Santa Cruz Police Department in California adopted such a program after a six-month pilot period.

The program was modeled after an earthquake aftershock-prediction tool and used prior agency data to forecast where future crimes would occur.
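
In rough outline, such aftershock-style models treat each recorded crime as an event that temporarily raises the predicted risk of further crime nearby, with the effect fading over time and distance. The sketch below is purely illustrative, with invented decay parameters; it is a toy version of that idea, not the Santa Cruz program’s or any vendor’s actual code.

```python
# Toy "aftershock" risk model: each past crime boosts the predicted
# risk of nearby map cells, and the boost decays with age and distance.
import math

TIME_DECAY = 0.1   # per day (invented for illustration)
SPACE_DECAY = 2.0  # per kilometer (invented for illustration)

def cell_risk(cell_xy, past_crimes, now):
    """Score one map grid cell from (x_km, y_km, day) crime records."""
    x, y = cell_xy
    risk = 0.0
    for cx, cy, day in past_crimes:
        dist = math.hypot(x - cx, y - cy)
        age = now - day
        # Older, farther crimes contribute exponentially less.
        risk += math.exp(-TIME_DECAY * age - SPACE_DECAY * dist)
    return risk

# Rank patrol cells so the highest-risk boxes come first.
crimes = [(1.0, 1.0, 10), (1.2, 0.9, 12), (5.0, 5.0, 3)]
cells = [(1.0, 1.0), (5.0, 5.0), (9.0, 9.0)]
print(sorted(cells, key=lambda c: cell_risk(c, crimes, now=14), reverse=True))
```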

Since then, tools from many vendors — Geolitica, CivicScape, ShotSpotter Connect, even IBM — have been adopted by 152 departments across the U.S., according to the Electronic Frontier Foundation’s Atlas of Surveillance, which tracks the technologies police agencies use.

With such a strong demand for predictive policing tools, newer companies have entered the market — seeking to leverage the vast amounts of data they gather.

Companies like Knightscope, which specializes in autonomous robots equipped with cameras, and Axon, which provides departments with body and dash cameras, are looking for ways to incorporate the data they’ve gathered from their devices into analyses for their customers, experts said.

“The second generation of predictive policing is about police platforms where — like everything else in the digital world — if you control the platform, you control the game,” Andrew Ferguson, law professor at American University, told Digital Privacy News.

“What you’re seeing are different technology vendors using their particular technologies as the ‘in’ to be the platform.”

New Privacy Concerns

This new surveillance-based model of predictive policing could prove attractive to law enforcement agencies, but it presents major concerns for those worried about its privacy implications, experts told Digital Privacy News.

Matthew Guariglia, policy analyst at the Electronic Frontier Foundation (EFF), said these companies had a real appetite for monetizing the data they already had collected through surveillance by feeding it into predictive policing tools, despite the tools’ many flaws.

“If you control the platform, you control the game.”

Andrew Ferguson, American University.

“Surveillance-forward predictive policing is a recipe for circumstantial evidence that could get people into trouble even when they are completely innocent,” Guariglia said.

“Just because you were at a picnic in the park when a crime was committed shouldn’t make you a suspect.”

Knightscope, founded in 2013, said on its website that its security robots could perform facial recognition and license-plate recognition, capture high-resolution eye-level video and detect temperature changes, among other tasks.

The company has rolled out its autonomous robots in several locations, most recently at an apartment complex in Las Vegas and at the Chumash Casino in Santa Ynez, Calif., northwest of Los Angeles.

The website of the Silicon Valley-based startup added that its robots had detected more than 1.4 billion device signals, including cellphone signals, over five years of commercial operation.

The company’s K5 model, an outdoor security robot, also can store up to 90 terabytes of raw data that could be fed into predictive policing algorithms, Knightscope’s website said.

Further, the site said its data-gathering robots could upload data in real time to predictive algorithms to stop crime.

Knightscope officials did not return requests for comment from Digital Privacy News.

According to Axon’s website, the company provides body cams, dash cams and drones to many police departments across the country. The company also offers Axon Evidence, a cloud-based storage system for digital data.

So far, Axon’s AI initiatives have been limited to the automated blurring of individuals captured in videos and to automated reports generated from video and audio.

An Axon spokesperson told The Intercept in 2017 that the company was not planning to build predictive policing tools, despite a company report that year suggesting that Axon customers soon could use such technology to “predict future events.”

Axon officials also did not return requests for comment.

Others Following Suit

Meanwhile, other tech companies successfully have taken advantage of the demand for more policing tools.

ShotSpotter, which started in 1996 by offering gunshot-detection systems, purchased HunchLab, a company that used risk modeling and AI to predict crimes, in 2018.

More recently, technology from Clearview AI, which is accused of eroding individual privacy by scraping billions of images from social media networks for facial-recognition purposes, has been adopted by nearly 2,000 public agencies across the U.S., according to news reports.

“Surveillance-forward predictive policing is a recipe for circumstantial evidence that could get people into trouble even when they are completely innocent.”

Matthew Guariglia, Electronic Frontier Foundation.

With so many companies looking to improve predictive policing algorithms through their own formulas, experts told Digital Privacy News that these advanced tools would further erode people’s sense of privacy.

“We have largely underestimated the civil liberties, civil rights and privacy concerns of surveillance technology — including predictive policing,” AU’s Ferguson said.

“It’s invasive to people’s sense of security and privacy — and in many cases, it’s threatening a form of social control.”

Lingering Fears

Rachel Levinson-Waldman, deputy director of the Brennan Center for Justice’s Liberty & National Security program, told Digital Privacy News that she was concerned about the effectiveness and accuracy of these surveillance-based technologies, as well as the lingering privacy concerns that came with them.  

“The more systemic and longer-lasting concerns are the privacy concerns and what the impacts are if you can be identified anywhere under any circumstances: as you’re walking down the street, as you’re going into a store, as you’re entering a meeting,” Levinson-Waldman said.

“We do rely on some form of day-to-day anonymity — and that’s just important to people’s lives.”

She also noted concerns that the data gathered by these companies could be sold to advertisers.

A dataset that a predictive policing algorithm compiles about an individual, Levinson-Waldman explained, could prove very useful for advertisers trying to build targeted ads.

In addition, Guariglia argued that a crime-fighting algorithm from these data-hungry companies was no more effective than experienced beat cops patrolling neighborhoods.

He said predictive policing “operates on knowledge that police already have through lived experiences.

“If their experience tells them that a lot of car break-ins happen at this specific intersection, why do they need to spend thousands of dollars a month on an algorithm that tells the same thing?”

Sam Klepper, ShotSpotter’s vice president of marketing and product strategy, told Digital Privacy News that the company reviewed any concerns for privacy or harms to civil liberties with its products.

The company, he said, does not collect personally identifiable information and conducts third-party audits to find any potential areas to be strengthened.

A July 2019 audit provided to Digital Privacy News found that ShotSpotter’s gunshot-detection technology posed an “extremely low” risk of voice surveillance. The audit was conducted by the Policing Project at New York University.

Privacy Issues, New and Old

After roughly a decade of varying use, some of the fundamental privacy concerns raised about first-generation predictive policing tools still are present in the newer models.

In Pasco County, Fla., Sheriff Chris Nocco built a predictive policing tool based on arrest histories after taking office in 2011.

However, a September investigation by the Tampa Bay Times showed that the system largely was used to surveil and harass residents.

ShotSpotter, maker of gunshot-detection systems, conducts third-party audits and does not collect personally identifiable information.

Sam Klepper, ShotSpotter.

The report showed that county residents, some younger than 18, were targeted — though they had only one or two arrests on their records.

In continuously checking on suspects, deputies gathered information on them — as well as on their family members and friends — and fed it back into the system.

Bans in Other Cities

Other cities have banned predictive policing technology outright.

As one of the first adopters of predictive policing, Santa Cruz last June became the first U.S. city to ban it. In January, Oakland, Calif., barred predictive policing and biometric-surveillance technology.

Other cities are debating the controversial tool — and EFF’s Guariglia said he foresaw more bans forthcoming.

“We’re going to see a lot of momentum to ban predictive policing, especially when people start to consider how much money departments are spending on this and whether or not there are actually any results,” he told Digital Privacy News.

Short of a ban on predictive policing, the Brennan Center’s Levinson-Waldman called for more disclosure by police on how the technology worked and for opportunities for the public to comment on its use.

The Effectiveness Debate

The debate over whether predictive policing is effective has been raging for nearly as long as the technique has been used by police.

A 2013 report from Rand Corp. concluded that “these tools are not a substitute for integrated approaches to policing, nor are they a crystal ball.”

But six years later, researchers in the Netherlands cited the need for “stronger empirical assessment of these approaches to understand the relation between features of the approaches and success in reducing certain forms of crime.

“When there is more evidence available to back up the claimed benefits and drawbacks of predictive policing,” the researchers concluded, “it can be objectively determined how effective predictive policing methods are and how they can contribute to the traditional policing methods.”

AU’s Ferguson also pointed to audits on predictive policing tools in Los Angeles and Chicago, with both showing no clear benefits.

In 2019, the Los Angeles Police Department scrapped its predictive policing program following an internal audit — and Chicago ended its person-based tool after eight years, according to a report from a city government watchdog group released in January.

“There is no body of research to show that predictive policing has any crime-control benefits,” Ferguson told Digital Privacy News.

“The costs are equally hard to evaluate — as the harm to trust, freedom from surveillance and personal security have few objective measures.”

Dutch Court Ruling

In another blow to predictive policing, the District Court of The Hague in the Netherlands ruled in March 2020 that the right to privacy should prevail over the hunt for alleged criminals.

The court ruled that SyRI, an algorithmic system designed to catch potential fraudsters by analyzing data from government agencies, violated the human right to privacy.

“We do rely on some form of day-to-day anonymity — and that’s just important to people’s lives.”

Rachel Levinson-Waldman, Brennan Center for Justice.

According to the court, the right to respect for private life, including the right to the protection of personal data, increasingly was important amid the development of new technologies.

The absence of sufficient and transparent safeguards around these technologies, the court ruled, could have a chilling effect on the population.

Monetizing Data

Despite questions surrounding predictive policing, many companies still seem eager to monetize their data-harvesting techniques, experts told Digital Privacy News.

“What’s happening is something more subtle,” Ferguson said, “which is the idea that everything that police are doing now is datafied, every place they go in their cars is being tracked by GPS, every arrest they’re making is getting input into a data set.

“We have so much information and so much data that it’s going to be used,” he added. “The question is who are the vendors or programs that are going to use that for predictive analytics.”

Jackson Chen is a Connecticut writer.
