‘I Don’t Think We Should Champion People for Delayed Enlightenment’
By C.J. Thompson
First of three parts.
Attorney Rashida Richardson was featured in last fall’s Netflix documentary, “The Social Dilemma,” for all of seven seconds — despite being interviewed for more than four hours.
In her clip, Richardson, assistant professor of law and political science at Northeastern University School of Law in Boston, addresses social media’s curation of “truth.”
The unused bulk of Richardson’s interview discussed how bias and discrimination in Big Tech pushed out experts who flagged the problems the film highlights.
Richardson, who worked at Facebook early in her career, is a leading researcher in developing legislation on the impact of technology and big data on civil rights.
She testified before the U.S. Senate in 2019 about Big Tech algorithms. Richardson was one of the architects of New York’s Public Oversight of Surveillance Technology (POST) Act, which took effect in January and requires the New York City Police Department to disclose its surveillance methods.
A graduate of Wesleyan University and Northeastern’s law school, Richardson also worked as legislative counsel for the American Civil Liberties Union and was a senior fellow for the German Marshall Fund of the United States.
In the first of three interviews, Richardson spoke to Digital Privacy News about the past, present and future of privacy — and shared insights on “The Social Dilemma.”
This interview was edited for length and clarity.
“The Social Dilemma” explains how psychology was used to optimize social media’s addictive qualities. It’s a tough sell that the creators were naïve about the negative effects that would follow.
That’s what opened the movie to a lot of criticism.
In some ways, they did want to focus on this evolution of people who thought this was a good idea and now see the problems.
But at the same time, that narrative ignores a lot of the toxicity that exists in that space.
Like, if you question or critique the business model in any way, you are pushed out — and often have a scarlet letter on you, where you won’t get a job in the industry anymore.
I think a lot of the outrage over the firing of Timnit Gebru (a leading Black woman computer scientist let go by Google in December) amplifies that.
It’s also very difficult for people who saw these problems early, and have always been critical, to hear this narrative and have people celebrated for their naiveté.
Meanwhile, people who did try to do something are suffering or have suffered the repercussions of doing that.
Is that a symptom of a bigger problem?
I feel like there’s an inability in society to grapple with that — and I don’t think it’s limited to tech, because you have these same problems in most corporate sectors.
As much as people want to say, “We should have whistleblowers,” we don’t have good protections for whistleblowers.
And if you’re a whistleblower from any marginalized group in society, that could mean ruin for you and your career.
Could someone who may have financially benefited from the harms of a problem be mobilized to help turn it around?
I don’t want to deter anyone from recognizing problems and using power when they have it to speak out, but we must think critically about who is given a microphone and who is rewarded.
People should be allowed to evolve — and that should be encouraged — but I don’t think we should champion people for delayed enlightenment.
Regarding the film’s thesis, what are the privacy harms of liberally posting one’s life on social media?
Part of the problem — which is a privacy concern, but also an equity and racial justice concern — is that once you put something out in the universe, you no longer have control over how it’s viewed and understood.
The term is “context collapse,” in that people can have subjective views of how the information is interpreted.
That’s the concern: The control over one’s image or how one is viewed in society is lost.
That flow of data, how it’s used, how it’s repurposed, how it can be misinterpreted — are all the things that are concerning once something’s out in the universe.
Is there more or less equity in the social media-sphere?
Also, the benefits or the deficits that come from sharing information relate to who you are in society.
Some of us can monetize a video on TikTok — and then for others, it can be evidence that you may be affiliated with a gang.
That can all depend on who you are, where you are and who you associate with — and that’s the problem.
That’s one of the many problems related to sharing of information.
What advice would you give regarding sharing online?
I’ll stick to Twitter: That’s an easy means of sharing information outside of one’s network that can’t necessarily be achieved by group emails.
The tradeoff to larger communication and a larger microphone is that you don’t control who gets to see that information.
It’s not only the government; it’s people with adverse views. It could be counter-protesters like the Proud Boys showing up.
What could alleviate these issues?
That’s where more diversity of platforms could help.
Maybe there is a way to have some hybridized model of sharing information that is encrypted or protected in some way.
But in the same way that there are tradeoffs to tech innovation, people must think about what’s more important: Is mass communication and sharing more important than maintaining some level of privacy on an individual or group level?
You can DM and have more private messaging functions on a lot of social media apps, but the companies own that — and there aren’t really many restrictions that stop law enforcement from getting that data.
In fact, the main federal law that controls this was written in the 1980s — and most states don’t have this protection.
Cops can get a court order, which has a really low standard.
You can get anyone to sign off on that to get access to information.
Last summer, there were reports police checked social media to target protesters. Where does that fit in?
Currently, if you do post something on a public platform, there is no privacy right to that.
That’s accessible by anyone — and that also becomes something that’s hard to regulate.
If the information is being volunteered, it’s hard to say that private actors — or even the government — can’t use it.
That’s where having better public education is necessary.
What has driven the divide between people obsessed with protecting their privacy versus those sharing liberally on social media?
It’s in part due to the lack of any type of tech literacy — and that people don’t see that there’s a tradeoff to everything, especially when dealing with digital platforms.
These services are free because they’re ad-based models.
People don’t think about why or how it’s possible for a company to provide free services to billions of people in a capitalist world.
Someone’s got to make money — and they’re public companies. What are their shareholders getting?
People understand that in a very abstract sense, but don’t think about it on an individual level: “What does this mean for me as an individual? What’s the downside to convenience?”
That is usually what people are seeking.
It’s easier to use an app — and if you don’t fully understand the ecosystem, you don’t tend to think: “Why is this free? Why is this available to me?”
Are there upsides to the problems?
Some of the complications with regulating and addressing Big Tech or disruptive technologies are that, in some ways, these technologies are amplifying and worsening a lot of structural inequalities.
At the same time, they can be providing some type of short, limited remedy to structural concerns that are otherwise not addressed by government or other players in society.
People understand, for example, that Facebook is mining their data but aren’t clear why that’s problematic?
The solution to Big Tech, or technology generally, must be both government regulation and cultural change — in that you can have antitrust enforcement, like we’ve had (recently) with Facebook.
Is that a way to create the platform diversity you mentioned?
If you have more competition, people have more choice.
In that case, Facebook’s data collection and overall business model is problematic in part because, in some ways, they lack competition — and, therefore, consumers lack choice.
But even if there were 10 different social media companies available to consumers, it still comes down to the individual questioning what value (they) actually get from Facebook.
So real cultural change is lagging?
With the visibility of the business model that came from “The Social Dilemma,” and the critical scholarship in this area — people see there’s a problem, but still don’t question use.
If you have people that don’t even question, then you don’t get changes in actions.
But if people started to question what benefit they get from any technology service, whether it’s social media or apps, that’s where you would start to get some questioning of its utility on an individual — and, hopefully, collective — level.
I’m not quite sure if that can happen at a community-wide level.
Next Monday: Privacy origins, civil rights and collective responsibility.
C.J. Thompson is a New York writer.
- Rashida Richardson: About Me
- Maryland Law Review: Defining and Demystifying Automated Decision Systems
- Main Image Credit: 7th Empire Media