Q&A: UMass’ Erik Learned-Miller

Why Facial-Recognition Technologies Need Their Own FDA

By Jeff Benson

First of two parts.

The Food and Drug Administration (FDA) has worked to ensure the nation’s food and drug supplies are safe and effective since its founding in 1927.

In a white paper released last month, “Facial Recognition Technologies in the Wild: A Call for a Federal Office,” four researchers argue that emerging facial-recognition tech needs its own version of the FDA. 

Co-author Erik Learned-Miller, a professor of computer science at the University of Massachusetts Amherst, believes facial-recognition technologies (FRTs) are too complex for legislation alone to be effective.

You argue the government should create a federal office for regulating FRTs. Without regulation, what privacy abuses can occur?

Where we don’t want to get to is a place like what’s going on in China, where the government really is tracking and monitoring a large number of its people. It’s not just through face-recognition technologies; face recognition is one component, alongside things like intercepting cell phone signals.

One of our fears is that we end up at a place like that — and we really don’t want to get there.

Police have managed to get their hands on driver’s license databases, where people never agreed to have their pictures used for a police database.

So, how did they get the right to do that?

It’s not necessarily that anybody’s violating the law. It’s more that people had their picture taken for one purpose — and then it ended up in a database being used for another purpose.

That’s one example of the kind of thing that can be concerning.

Could you briefly explain the classifications you used between different facial-recognition technologies?

We call them facial-recognition technologies because there are many, many problems that can be solved by these technologies that are related, but not exactly the same. 

For example, face detection is when a computer looks at a picture and basically tries to put a rectangle around any face it sees.

That’s face detection, but it doesn’t identify the person.

Another one is face verification. That would typically be used to unlock your smartphone — where, when you bought the phone, you took a picture and said, “I’m the owner of the phone” and later, when you want to unlock it, it takes another picture and tries to see if it’s the same person. 

Then there’s face identification, which is where you compare one person to a whole database of stored people. 
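For readers who want a concrete sense of the distinction, here is a minimal sketch in Python using the open-source face_recognition library. The file names and the 0.6 match threshold are illustrative assumptions for this sketch, not anything described in the interview.

```python
# Sketch of the three tasks Learned-Miller describes, using the
# open-source "face_recognition" library. File names and the 0.6
# threshold are illustrative assumptions.
import face_recognition

# Face detection: put a rectangle around any face in a picture,
# without identifying the person.
image = face_recognition.load_image_file("photo.jpg")
boxes = face_recognition.face_locations(image)  # list of (top, right, bottom, left)

# Face verification: one-to-one. Is the person unlocking the phone
# the same person who enrolled when the phone was set up?
enrolled = face_recognition.face_encodings(
    face_recognition.load_image_file("owner_enrollment.jpg"))[0]
attempt = face_recognition.face_encodings(
    face_recognition.load_image_file("unlock_attempt.jpg"))[0]
is_owner = face_recognition.compare_faces([enrolled], attempt, tolerance=0.6)[0]

# Face identification: one-to-many. Compare one face against a whole
# database of stored people and return the closest match, if any.
database = {
    "alice": face_recognition.face_encodings(
        face_recognition.load_image_file("alice.jpg"))[0],
    "bob": face_recognition.face_encodings(
        face_recognition.load_image_file("bob.jpg"))[0],
}
distances = face_recognition.face_distance(list(database.values()), attempt)
name, dist = min(zip(database.keys(), distances), key=lambda kv: kv[1])
match = name if dist <= 0.6 else None  # None means "not in the database"
```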

Do these classifications match up with the risk levels you describe in the paper?

Things like face detection tend to be much lower risk, though not always.

But it’s when you’re trying to attach a name to somebody — and you get it wrong in a very serious application, like if you’re trying to arrest somebody or trying to decide whether to give them a job or things like that — that’s when the consequences can be pretty severe. 

Why can’t comprehensive federal legislation solve these privacy and surveillance threats?

It’s an incredibly complicated set of technologies.

I’ll just give you a simple example: San Francisco decided, “Oh, this is a no-brainer. We don’t want this creepy face-recognition stuff, so we’ll just ban it and not allow anyone in city government to use face-recognition technology.”

Then, all of a sudden, people went into work on Monday and realized they couldn’t use the face unlock on their iPhone 10s, because now it was illegal.

This is just a simple example, where a … low-risk application like unlocking your own phone gets swept up in a law that’s much too broad. The law is targeted at things that maybe shouldn’t be available, but they don’t really know how to draw the line.

So, basically, there are so many different ways to use face recognition that you need an incredibly complex system to govern it.

The analogy that I like to go back to over and over again is: “Why do we need the FDA? Why not just have a law that says how all drugs should work, with all the regulations written into the law?”

And the answer is it’s much too complicated — and it’s also dynamic.

Things are changing all the time, so it’s better to have a group of experts that understand the differences in applications, differences in risk levels — and don’t try to encode it all in one law that has to be understood by legislators.

It might be a 40,000-page law — and then as soon as you’ve finished writing it, it’d be obsolete, because a new technology would have been developed while it was being written.

Tuesday: How a federal office for FRTs might work.

Jeff Benson is a Nevada writer.
