Q&A: Georgetown Law’s Clare Garvie

Face Technology Could Stymie First Amendment-Protected Activities

By C.J. Thompson

“You are probably in a criminal face-recognition network” is a chilling statement from “The Perpetual Line-Up: Unregulated Police Face Recognition in America,” a 2016 report by the Center on Privacy & Technology at Georgetown Law.

Given the high participation and law-enforcement surveillance at recent demonstrations against police brutality, the statement is likely true now for even more people.

As the report documented, facial-recognition regulation is spotty to nonexistent across the growing number of police departments that employ it.

The document included 30 recommendations that have served as a reference for lawmakers as demands for regulation and oversight have increased.

Clare Garvie, coauthor of the report and senior associate at the Georgetown Center, told Digital Privacy News that the U.S. was now at a pivotal — and very necessary — public-awareness crossroads. 

Amazon, Microsoft and IBM recently pledged to delay (or discontinue) providing facial-recognition systems to law enforcement. What are your thoughts on these gestures?

They’re important symbolic steps.

They suggest that the risks of face recognition are recognized and taken seriously, even by large companies that stand to benefit from its sale and use.

However, these companies are not major players in the face-recognition space, meaning that the market for police face-recognition systems remains largely unchanged.

Companies like NEC, Idemia, Rank One Computing and others hold the largest contracts — and they have not made commitments to back out of the market.

What’s been the impact of all the sudden, high-profile attention to facial-recognition technology?

Back in 2016, we were writing in an environment where almost nothing was publicly known about the technology and how it was used.

Over the last four years, we’ve had so much information come out about how it’s used and the risks posed by that use.

What advocates want in light of that information is actually more strict than what we anticipated was feasible back in 2016.

Last year, we saw seven local jurisdictions in California and Massachusetts pass bans or moratoriums on face-recognition technology.

It’s very encouraging. 

Which recommendations from the report have generated the most pushback from law enforcement?

The most resistance has been to restrictions on the types of crimes face recognition can be used for, and the standard of suspicion required before a search is run.

At a public protest, you see a lot of misdemeanors — such as blocking public roadways, violating noise ordinances and gathering in groups in excess of what’s allowed in a certain space.

Face recognition could easily be used as a tool in going after those misdemeanors.

We recommend that face recognition not be used for just any crime. There are a lot of routine violations for which we do not expect enforcement.

I can’t think of anybody who hasn’t jaywalked in their life. So, we use face recognition to prosecute all jaywalkers?

I think that’s excessive. That could actually suppress free speech at a time when free speech and public protest are incredibly vital to conversations about democracy and our criminal-justice system.

Auditing is the other thing we’ve seen a lot of resistance to in law enforcement.

When we talked to Pinellas County Sheriff Bob Gualtieri (in Tampa Bay, Fla.), he said, “We do not police our police officers.”

There’s this idea that audits — actually reviewing and establishing internal oversight over which investigations are run and how searches are conducted — would indicate mistrust of officers.

But audits are a very noncontroversial and routine way for the public to gain assurance that this technology’s not going to be misused.

So, there’s a tension there as well. 

Recently, several major cities admitted to using surveillance tech on George Floyd and Black Lives Matter protesters. In contrast, surveillance doesn’t appear to have been deployed on the armed, predominantly white protests held against COVID-19 shutdowns. Is that the kind of potential usage-bias disparity that could be problematic with facial recognition?

It’s definitely a concern.

What we’ve seen with face recognition is that it’s disproportionately used on communities of color.

In San Diego, a review the city conducted of its use of face recognition and license-plate readers found that these technologies were used up to two-and-a-half times more often on communities of color than their share of the city’s population would predict.

With face recognition, the added concern is that these systems overwhelmingly run on mugshot databases, which are based on who gets arrested in a given city.

Again, that is disproportionately individuals of color.

So there is disproportionate targeting built into investigations themselves.

In 2011, a group of law-enforcement entities conducted an impact assessment and found that face-recognition use on First Amendment-protected activity could lead to chilling effects, causing people to alter their behavior in public, leading to self-censorship and inhibition — quintessentially what we need to protect free speech from.

That was a finding by law enforcement themselves.

But we have not seen robust policies saying that you absolutely cannot use this technology on First Amendment-protected activity.

Instead, we see that language, but then carve-outs for “unless there’s a law-enforcement purpose.” Or, we see policies that don’t have that language at all.

It means there isn’t protection for protesters against use of the technology, which is very concerning.

What kind of pressure has been the most influential toward reining in facial-recognition technology? 

Increasing public transparency has been the single largest shift.

Five years ago, these face-recognition contracts were being signed and millions of dollars were being expended — and we had no idea it was happening.

Advocates have made a very clear showing that the risks to privacy and civil rights and free speech are very real.

Law enforcement has not made a similar showing that the benefits to public safety and crime reduction have actually materialized with this technology.

In my view, why would a city like Detroit or anywhere else spend more money on this technology that hasn’t been clearly shown to have a benefit?

And it has been clearly shown to have real risks. 

Racial-bias tendencies in face recognition are noted in your report and were recently borne out in the incorrect identification and arrest of a Black man in Detroit. Why are vendors still selling this technology?

The main cause has been a complete lack of oversight. It’s a very supply-driven market.

Companies have a vested interest in selling their technology and underplaying the risks or technical challenges as they sell it to police.

And if police don’t have to get community buy-in or city council approval before purchasing, then these risks or concerns have largely flown under the radar.

It appears that it’s actually very hard to make a face-recognition system that doesn’t perform differently depending on the race, sex and age of the person being searched.

Some of the theories around gender are that makeup and changing hairstyles make it inherently harder to identify women.

That’s not something that technology can really do much with.

Another challenge is photographic contrast.

Photography has developed around taking good-quality, well-developed images of white faces and not of darker faces.

That’s a contrast issue that would have to be solved in the hardware, not necessarily in the software.

But I caution against a focus just on accuracy — because if we solve the accuracy issues of recognition technology, we’ve all of a sudden made it that much more powerful.

A lot of advocates say it has no place in this society and we need to ban it for those reasons.

It’s either going to be inaccurate, which is a problem — or it’s going to be perfectly accurate, which is also a problem and antithetical to our democratic norms and freedoms.

Other folks will say there’s a way to craft rules to protect against those issues while still preserving any perceived or actual benefit that the product has.

But I don’t think that anyone’s going to argue that we should continue using face-recognition technology with no rules in place.

That’s a hard argument to make now.

C.J. Thompson is a New York writer.
