Q&A: UMass’ Erik Learned-Miller

People Don’t Trust Face Technology

By Jeff Benson

Last of two parts.

Erik Learned-Miller, a computer scientist at the University of Massachusetts Amherst, believes the Food and Drug Administration (FDA) provides an effective model for regulating face-recognition technologies (FRTs).

The professor explains how an FDA-like agency could foster trust in FRTs.

You note the Food and Drug Administration has a history of regulating complex technologies. How might a new face-recognition technology go through an FDA-like process?

In a previous life, I was in the medical-device industry … . 

One of the central ideas (at the FDA) is intended use. That says that whenever you sell something like a medical device or a drug, you should specify very clearly what it’s for and what it’s not for.

So, the first step is for manufacturers to carefully say: “We’re targeting consumer applications. We’re doing face recognition for your phone, for Snapchat, for frivolous fun things. That means you cannot use our software for medium-risk or high-risk applications like interviewing job candidates and assessing their suitability for hiring or on a police bodycam.”

After they’ve done that, there’s a natural sort of risk category — because once you know what something is for, then you can talk about how serious the errors are: Does your nose look too big in some silly phone app? Or do you lose a job because somebody misinterpreted your eyebrow raise?

The FDA has faced criticism for being too cautious in approving things like coronavirus test kits and AIDS treatments. Are there trade-offs to an FDA-like regulatory body for FRTs? 

Absolutely. There’s no question that it is a trade-off. But it’s easy to criticize the FDA.

They regulate thousands of medical devices and thousands and thousands of drugs — and when you look at the number of things for which they’re criticized, it’s not that big.

In general, if you walk into your local pharmacy and you pick something off the shelf, you have incredible confidence in it. People take this for granted, but 100 years ago you couldn’t do that — and that’s because of the FDA.

Now, if you have a disease that’s killing thousands and thousands of people per month, you have an incredible pressure to approve things — but you also want to make them safe.

You can end up hurting more people than you help if you approve things too quickly.

There’s a famous case of thalidomide (in Germany in the late 1950s and early 1960s). This is the one that caused gross deformities in newborn children.

There was a woman working at the FDA who felt it had not yet been proven safe and effective for introduction in the United States — and she blocked it.

That’s a success story of holding off until you know more about something.

Of course, it could go the other way — that you’re blocking the sale and distribution of something that could help people.

But nobody knows how to solve that problem perfectly. The FDA understands those trade-offs very, very well.

Amazon just put a one-year moratorium on police use of its facial-recognition technology so Congress could create rules around the tech. Might companies welcome a federal office acting as referee, or do they just want a list of rules?

I definitely believe that it’s in everybody’s interest, including industry but also consumers, to have something like this.

I’ll go back to the FDA analogy: A hundred years ago, if you went into a drugstore and tried to buy something for your sick child, you had no idea whether it was going to make them sicker or make them better or if it was just sugar powder.

Ultimately, that’s bad for industry if people don’t have faith in your products. And, right now, I’d say people don’t have faith in the products. 

Police departments have, for example, taken an image, tried to match it against a database and gotten no matches — so then they drew a mustache on the person and got a match.

We wrote the white paper hoping that when companies read it, they would see that this is sensible.

If we’re going to sell face recognition for a social-media app, it makes sense that it shouldn’t require the rigorous testing that you would have to have if it were going to be sold to police, for example.

That’s just common sense. 

So, the marketplace can also specialize. There can be a low barrier to entry for low-risk applications.

But if you’re going to do things like give them to police departments, you really need much more infrastructure to establish that what you’re doing is safe and effective — that it’s treating people equally and fairly, that people have recourse and are informed, and that there’s transparency about when the technology was used on them.

In other words, an office like this would create a natural stratification of the industry, which is totally appropriate and ultimately would help everybody.

Jeff Benson is a Nevada writer.
