Q&A: University of Chicago’s Ben Zhao and Heather Zheng

‘Fawkes’ Tool Protects Against Unregulated Facial Software

Examples of original photos and versions that have been “cloaked” by the Fawkes tool created by a team at the University of Chicago. Team co-leaders Heather Zheng and Ben Zhao are pictured on the bottom row. Credit: SAND Lab, University of Chicago.

By Rachel Looker

With the abundance of surveillance cameras in stores, at traffic lights and in most people's pockets, the risk of your face being captured for unsavory purposes is greater than ever before.

To protect individual privacy, University of Chicago professors Ben Zhao and Heather Zheng led a team to create the “Fawkes” algorithmic and software tool.

The tool "cloaks" personal photos before they are uploaded online, so that any facial-recognition model trained on them without authorization will misidentify the person in them.

Zhao and Zheng are Neubauer professors of computer science and direct Chicago’s Security, Algorithms, Networking and Data (SAND) Lab.

They told Digital Privacy News that cloaking photos was critical to preventing third parties from building unauthorized facial-recognition models and misusing images.

What is “Fawkes” and how does it work to protect individual privacy?

Zhao: Fawkes is designed to allow you to protect yourself while sharing your photos online, in the sense that your content should not be used to train unauthorized facial-recognition models and then used to invade your privacy.

Companies like Clearview AI have shown that you can go online, collect lots of a person's photographs and use them to build a facial-recognition model of that person.

If you’re walking down the street — and whether it’s a surveillance camera or some random person who wants to take a photo and then identify you through that photo and all your other information that’s online and associate that together — they could use these facial-recognition models to do that.

Fawkes was designed such that if you apply it to your own photos before you release them online, Fawkes breaks this process.

It makes it so that anyone taking these photos of you and trying to build models of you will fail.

How would Fawkes make these models fail?

It’s not like a virus; it’s more like a poison attack.

“It’s not like a virus; it’s more like a poison attack.”

Ben Zhao, University of Chicago.

What does that mean?

Zhao: It attaches this poison to your photos in a way that has minimal impact on how the photo looks to the human eye, but when it is treated as training data for a facial-recognition model, it makes a huge difference.

Your photos will effectively act like a Trojan horse and corrupt any model that tries to be built using your photos.

If someone were to train a model based on your photos, that model would fail: it would get an impression of what you look like that's actually quite different from the real you.
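To make "poison" concrete: cloaking of this kind can be framed as a small optimization problem over the pixels of a photo. The sketch below is our illustration, not the Fawkes code; the cloak() function, the per-pixel budget and the stand-in feature extractor are all assumptions. It nudges a photo so that its feature vector, the thing a recognition model actually learns from, drifts toward a different identity while the pixels barely change. (The published Fawkes system uses real face-embedding networks and a perceptual DSSIM budget rather than the crude per-pixel cap here.)

    import torch

    def cloak(photo, target_photo, feature_extractor, budget=0.03, steps=100, lr=0.01):
        # delta is the "poison": a small additive change to the photo.
        delta = torch.zeros_like(photo, requires_grad=True)
        target_features = feature_extractor(target_photo).detach()
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            # Pull the cloaked photo's features toward the other identity.
            loss = torch.norm(feature_extractor(photo + delta) - target_features)
            loss.backward()
            opt.step()
            with torch.no_grad():
                # Keep the change visually small: a crude per-pixel cap
                # standing in for Fawkes' perceptual (DSSIM) budget.
                delta.clamp_(-budget, budget)
        return (photo + delta).detach()

    # Stand-in embedding network and toy data, for illustration only.
    feature_extractor = torch.nn.Sequential(torch.nn.Flatten(),
                                            torch.nn.Linear(3 * 64 * 64, 128))
    photo = torch.rand(1, 3, 64, 64)         # your photo
    target_photo = torch.rand(1, 3, 64, 64)  # a photo of someone else
    cloaked = cloak(photo, target_photo, feature_extractor)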

How does this “poison attack” work to make facial-recognition models fail?

Zhao: Poison corrupts the model itself rather than any instance of you.

Once I poison the model — and assuming that we do it correctly — then you’re safe forever and ever.

Any photos of you that are undoctored, that are leaked out, that your friends tagged on Facebook, or just you at the grocery store not knowing that someone is taking a photo of you: all those things are fine, because the model itself is broken, so it will not recognize correct photos of you.
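A toy numeric picture of why the model, not the photo, is what breaks (the 2-D "embeddings" and the centroid "model" are invented for illustration; real systems use high-dimensional face embeddings):

    import numpy as np

    true_you = np.array([1.0, 1.0])            # where your real face sits in feature space
    cloaked_uploads = np.array([[9.0, 9.2],    # your cloaked photos sit somewhere else
                                [9.1, 8.9],
                                [8.8, 9.1]])

    # The tracker trains on your scraped (cloaked) photos, so what it
    # learns for the label "you" lands far from your real features.
    learned_you = cloaked_uploads.mean(axis=0)

    # A fresh, undoctored photo of you (say, from a street camera):
    new_photo = true_you + np.random.default_rng(0).normal(0, 0.1, 2)

    print(np.linalg.norm(new_photo - learned_you))  # large: the broken model misses you
    print(np.linalg.norm(new_photo - true_you))     # small: the photo itself is fine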

Why create this tool?

Zheng: We realized that AI has been widely used because models are so easy to train, and the amount of data online makes training data very easy to obtain. That data can easily be misused.

That’s what we’re looking at — whether it’s IoT devices or cameras or anything. What can an individual user do to protect themselves against this?

This work is more about how you don't want to have to wear a hat or other disguises.

A lot of times, these cameras are taking your photos when you’re not wearing these disguises.

That’s why “the poison attack is basically protecting you against these unknown pictures taken of you” — and you don’t need to wear a disguise every single day.

“The poison attack is basically protecting you against these unknown pictures taken of you.”

Heather Zheng, University of Chicago.

Fawkes targets unauthorized use of personal images, but “has no effect on models built using legitimately obtained images, such as those used by law enforcement,” according to a University of Chicago article. What does this mean, particularly in the growing debate over law enforcement illegitimately obtaining personal images?

Zhao: There are other people who are better versed in this topic in terms of ethics and politics and what the right regulatory environment is.

Society’s views are changing and, over time, are evolving.

This is a reasonable position to take, because it may be that certain agencies are going beyond what is expected of them, reaching for data and for tools that people don't expect or don't want them to have.

What we’ve done here is drawn a fairly straightforward line between black and white that I think most people can appreciate.

The idea is that if you give permission or consent to having a picture voluntarily handed over to a particular agency, then that is fair game for them to build facial recognition on.

If it’s just public photos that are lying around in cyberspace and online, then that should be off limits.

For now, it seems like this line is reasonable, in the sense that police and law-enforcement agencies can, within their power, subpoena photos and content of you and, if necessary and if the legal rights permit, build facial-recognition models.

This does not preclude any of that from happening.

It does, however, mean that (companies) like Clearview and others that people do not want their content to be accessible to are basically off limits, in essence.

This tool hurts those agencies.

We would be fine and happy if those (companies) disappeared entirely.

This tool effectively only takes a stance against that; the other legal and regulatory issues we'll let the politicians sort out.

How do you see Fawkes being incorporated into users’ daily lives?

Zhao: Everybody is going to be a little bit different.

For any sort of technology like this, there's always that adoption curve. You want to make it easy and low-friction, so a lot of users will try it and not be disturbed by it.

Zheng: Your browser now has a privacy mode. You can also make these photo uploads a kind of privacy mode.

If your camera has computing power built in, it can directly generate two versions: one for upload and one, maybe, for your local storage.

It’s just where you put the computation power and where do you optimize the storage and everything else.

By making Fawkes available and getting people to recognize it, you can essentially integrate it down to the lowest level, where the image comes from.

You can also integrate it into photo-editing tools as a feature, a privacy feature.

This privacy feature can be integrated anywhere you can generate a photo before you upload it.
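As a rough sketch of the integration Zheng describes, a capture pipeline could hand each new photo to a cloaking routine and keep the original locally. This is an illustration only, assuming a hypothetical cloak() routine; the directory names are invented and none of this is a released Fawkes API.

    # Hypothetical capture hook: keep the original photo privately and
    # queue a cloaked copy for upload. `cloak` is a stand-in for a
    # Fawkes-style cloaking routine.
    from pathlib import Path
    from PIL import Image

    def on_photo_captured(path: Path, cloak) -> Path:
        """Save the original locally; return a cloaked copy for upload."""
        original = Image.open(path)
        local_dir = Path("local_storage")
        upload_dir = Path("upload_queue")
        local_dir.mkdir(exist_ok=True)
        upload_dir.mkdir(exist_ok=True)
        original.save(local_dir / path.name)   # private, uncloaked version
        cloaked = cloak(original)              # perturbed, share-safe version
        upload_path = upload_dir / path.name
        cloaked.save(upload_path)
        return upload_path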

For an average social media consumer, what advice would you give them on how to protect images from third-party companies?

Zheng: Cloak, cloak your photo from now on! Especially if you’re young.

Cloak your photo as early as possible because as you grow, your face changes.

Even when we’re growing old, they (facial-recognition companies) need to update their facial-recognition models to use your new photos.

Cloak your photos as much as you can, so you’re protected against anybody in that domain.

Zhao: That’s the only thing we can advise.

It’s not like some of the other things out there, where there’s a bunch of existing tools to help you.

This is really, as far as we understand, the first of its kind against a new threat that people were not aware of.

Try to use our tool, and keep your eyes open for other tools like it.

The privacy landscape will change — and some of these companies will evolve, and the tools will have to evolve along with them to protect your privacy.

This is yet another dimension that people have to watch out for to keep their privacy protected in the future.

Rachel Looker is a writer in Washington, D.C. 
