Q&A: Harvard’s Latanya Sweeney

‘Technology Has Become the New Policymaker’

By Gaspard Le Dem

Last of three parts.

Latanya Sweeney believes the myth that people need to choose between privacy and the benefits of new technology must be dispelled.

In the last of three interviews, Sweeney, who holds a doctorate from the Massachusetts Institute of Technology, told Digital Privacy News that the public still cared about privacy, but that tech companies were deciding the rules we lived by.

This interview was edited for length and clarity.

Has public perception changed around data privacy? Have people become desensitized to their information being shared? 

I thought that until the pandemic started and contact-tracing took off.

If you talk to people about contact-tracing, what we see is really the opposite.

There’s certainly a desire in the public that doesn’t agree with “privacy is dead.”

How so?

For years, I taught a course called “Privacy and Technology.” I always wondered why students would sign up if they weren’t really interested in privacy.

Eventually, I realized they didn’t understand what the risks were to them, as individuals.

They didn’t have a sense of “creepiness” about technology. 

But as technology has advanced, there is a sense of creepiness when you Google a product you’re interested in — and, suddenly, you’re getting ads about it.

Or, when you order takeout — and when you show up, the restaurant already knows who you are.

People are freaked out, in that regard.

On the other hand, we have Alexa in our homes, listening to our words.

So, this trade-off between the benefit of technology and the loss of privacy is what we’ve been trapped in for a long time: The belief being that, in order to have the benefit of technology, you have to give up privacy.

That’s the real myth that has kept us trapped here. 

You conducted research about racial discrimination in Google AdSense in 2013. Have you since observed any other instances of racial discrimination in online algorithms? 

There are hundreds of papers on that very topic. 

Facebook recently had a regulatory intervention from (the U.S. Department of Housing and Urban Development) due to racially motivated advertising in housing.

The list is quite long.

Even an algorithm that recommends which candidates should be considered for a job, based on online resumes, can have biases around gender or age.

This is what I meant when I said that, being at the Federal Trade Commission, you could see the tidal wave coming.

Privacy was the first wave — and working on racial discrimination was the second wave. 

Now, the third wave is democracy itself. 

Is there systemic racism in technology? 

This is fascinating, right? 

The study on discrimination in online ads was the first one to make that case — and it’s turned out to be amazing work.

The funniest thing about it, of course, is that it was funded by Google. 

Google had given me a grant to do some interesting work. But, then, we found that ads in Google’s search results were implying that individuals with Black-sounding names had an arrest record, even if that record didn’t exist.  

So our reaction was: “This shouldn’t be happening! How do we stop it?”

How did Google respond?

Google claimed it was happening because of the way AdSense works: The more often somebody clicks on a version of an ad, the more often that version shows up in results.

The company argued that the discrimination was a reflection of society’s own values: That people were clicking more often on arrest ads when the name was Black, and neutral ads when the name was white. 

Another possibility was that maybe the advertisers themselves were biased: Did they really place roughly the same ads across all the names?

But Google could have run the same test that I did, right?

In a fraction of a second, Google runs an auction to decide what ad is going to show up in that space on that web page. And they take lots of factors into consideration.  

How do we fix this issue?

Google gets to decide whether its platform will be a weapon of systemic racism. Will it play a role in deciding that its platform won’t be used that way?

Will they just slap the wrist of their advertiser, or will they be vigilant and actively change their algorithms?

At Harvard, our students also researched racial discrimination on Airbnb. They found that, in California, a white host on Airbnb makes about 20% more money than an Asian host.

In response, Airbnb said: “We’re going to change our platform, so that we can make sure those kinds of biases don’t exist.”

Now, they have a team that studies how other kinds of biases can sneak into the platform, so they can actively try to get rid of them.   

So, what’s next?

These examples of technological bias show us how powerful technology is —  and they prove that technology has become the new policymaker.

We don’t elect companies. We don’t vote for them.

But they get to decide the rules we live by, and the rules they’re going to protect.

You can argue all you want about systemic racism in technology — but, the fact is, systemic-racism effects have now been documented on specific platforms, and companies are the ones who get to decide how to address them. 

How can users protect themselves against algorithmic bias?

That’s the thing. Just like with privacy, there’s little an individual can do.

Whether somebody can share your medical information is not a decision you, as an individual, get to make. 

It’s always the policymaker, the regulator, the company or the designer who has played the big role.

Should users give up some technologies to protect themselves and their data?

Some individuals will choose that, but it’s increasingly difficult to live in our society without the benefits of technology. 

That’s why we all have to insist on getting to these sweet spots.

Users should be able to say: “I want this new technology, but I want some privacy guarantees.”

It’s similar to saying: “I want a UL listing on my electrical appliance so I know I can plug it in — and it’s not going to blow up in my hand.”

What are the guarantees that we can start making for privacy around apps and various technologies?

More broadly, you’ve said that your role as a computer scientist is to help people share data rather than preventing them from doing so. But can we keep sharing data on such a massive scale before we have better privacy? 

I think the ship has sailed. Society is already providing data on a massive scale. 

What is going to be the new data economy? What are going to be the data policies that go along with that economy?

Those are the big questions that are going to be answered over the next five years. 

Because, as we look forward, the technologies that are on the horizon are all data-driven.

So, you can’t put the genie back in the bottle by saying we’re not going to have this data.

We are going to have this data — and we have to realize it’s totally tied to our economy.

So, what are the individual’s rights to their data in a data economy?

These are the rules we’re writing now. These are the challenges of the next five years. 

Gaspard Le Dem is a Washington writer.
