Press "Enter" to skip to content

Q&A: Alan Butler, Electronic Information and Privacy Center

‘It’s Not Really the Place of the Government to be Monitoring What People Are Saying on Social Media’

By Vaughn Cockayne

Alan Butler is the executive director of the Electronic Privacy Information Center (EPIC) in Washington.

He joined in 2011 and originally wrote amicus briefs in privacy cases of interest to EPIC before he was named executive director earlier this year.

In one case, Riley v. California in 2014, the U.S. Supreme Court unanimously held that the warrantless search and seizure of digital contents of a cellphone during an arrest was unconstitutional. Chief Justice John Roberts cited Butler’s brief in writing for the court.

Four years later, Butler wrote a brief in another landmark Supreme Court case, Carpenter v. United States, in which the justices ruled 5-4 that the Fourth Amendment protected citizens against warrantless searches of cellphone records that tracked individual movements.

A graduate of the UCLA School of Law and Washington University in St. Louis, Butler told Digital Privacy News that Congress had yet to make major decisions on privacy law — and that EPIC hoped to be a part of the process.

This interview was edited for length and clarity.

What are some of the projects EPIC has been working on since you became executive director?

I am excited about a number of initiatives.

We have been doing really important work across all our issue areas: consumer and internet privacy, surveillance, and human rights issues surrounding AI.

We have also really stepped up our investigative work around systems that impact all sorts of different internet users.

“AI systems themselves can have direct privacy impacts, in that they involve automated data-based decision-making processes that impact the individual autonomy of a person.”

Last year, we launched a number of significant investigations, most prominently into remote test-proctoring systems whose AI-driven data-collection practices we believed were unfair and deceptive.

More recently, we also led a coalition push to highlight the problems with COVID-19 vaccine registration.

These systems were essentially guiding users to register for profiles with commercial pharmacies as a necessary part of getting access to these life-saving vaccines — which we believe to be just wrong.

What is the Amicus Program?

The point of the Amicus Program is to take a proactive role in advocating for privacy protection as the law develops in the federal and state courts, with a particular focus on what happens in the U.S. Supreme Court, state supreme courts and lower appellate courts.

We try to get as comprehensive a view as we can of the major cases that are trending in the court system.

We are looking for cases that we believe speak to, or will speak to, emerging privacy-law issues in the constitutional space, the civil regulatory space or other areas.

Please explain the AI and Human Rights Project. What is the connection between artificial intelligence and privacy rights?

AI systems leverage and rely on large data sets.

And, to the extent that AI is driving the collection of personal data, that trend has significant implications for the need for privacy law.

We also believe AI systems themselves can have direct privacy impacts, in that they involve automated data-based decision-making processes that impact the individual autonomy of a person.

This involves whether a person is eligible for benefits, the punishment they receive as a result of their actions, or even decisions about them, like whether they can be hired or fired.


The Supreme Court in the Carpenter case “reached the conclusion we wanted: ‘You need to get a warrant to track people’s location.’”

Or whether a child should be flagged as having cheated on an exam.

It really speaks to the digital-autonomy interest at the heart of data-protection law.

Though, obviously, AI has human rights implications that go beyond the scope of that. 

You filed an amicus brief in the Riley v. California case, writing in 2014 that the Supreme Court’s decision would significantly affect privacy rights. Seven years later, do you believe you were right?

It has, and a number of the threats I identified in that brief are still being borne out.

Like what?

One is how the Riley concept will play out at the border. There are also cases in a number of state supreme courts involving rules around cellphone data-extraction tools.

But really the biggest proof of the impact of Riley was the Carpenter decision. Carpenter is the post-Riley decision.

To be honest, before that I wasn’t sure they would take on the question of limiting location tracking through provider data because of the history with third-party doctrine — but they did.

They reached the conclusion we wanted: “You need to get a warrant to track people’s location.”

There are still a lot of lingering questions post-Carpenter, but those two decisions in succession laid down a marker from the courts for where they think this needs to go.

Regarding the Supreme Court, its composition has changed in recent years. Do you believe these changes will affect privacy rights and the future of the commodification of data?

It is one of the things we constantly think about.

One of the interesting things to note is that, over the years, if you think about privacy broadly, it has been an issue that doesn’t typically have a partisan valence.

In fact, Justice (Antonin) Scalia wrote some of the most privacy-protective Fourth Amendment decisions we had until the more recent ones.

I think there is an alignment on privacy.

Even on an issue like Article III standing, which was seen as more of a partisan issue, Justice (Clarence) Thomas has actually been outspoken on the court in recent years, outlining his views about the correct approach to and interpretation of that doctrine.

Turning to more specific issues, Virginia will effectively ban facial-recognition technology on June 1. What does this mean for the state of privacy activism? The use of privacy technology?

There has been a major effort by EPIC and many other organizations over the past few years to ban the use of facial recognition.

There’s been a lot of focus at the state level. Virginia is just one example of the success at that level.

“General-purpose monitoring by an agency like the Postal Service … is mind-boggling and raises questions about its legitimacy as a policy.”

It’s incredibly important work, especially in light of the revelations of the inherently biased nature of facial-recognition technology — but also the role that the technology plays in fundamentally undermining the rights of private citizens. 

The U.S. Postal Service has been reported to be tracking social media posts, and EPIC put out a statement against this. Should it be surprising to people that the USPS is monitoring them?

It was certainly surprising to us to find out that the post office was engaged in social media monitoring.

We have been reporting on social media monitoring for many years.

It’s not really the place of the government to be monitoring what people are saying on social media, especially when it is well outside of their scope.

Obviously, there are going to be certain specific circumstances where in a particular criminal investigation it might be relevant.

But general-purpose monitoring by an agency like the Postal Service, which has only a pretty limited law-enforcement jurisdiction, is mind-boggling and raises questions about its legitimacy as a policy.

Now that the Florida privacy bill has failed, where do you see the defeat in the context of the broader fight for privacy protections nationwide? 

I see the Florida effort as one of the most recent exchanges in the ongoing debate over how to deal with privacy policy in the U.S.

This is a battle being fought in the states right now, with an eye toward what will happen in Congress in the future.

There needs to be a federal, comprehensive data-protection law.

We want to preserve the ability of the states to build upon a federal baseline — but we need to set the federal baseline, because it shouldn’t be the case that only people in California have privacy rights, for example.

We are advocating in part for the strongest privacy protections states can pass. But the overarching goal is to establish comprehensive data protection that can actually be enforced.

President Joe Biden is being urged to create a disinformation task force. Republicans and Democrats are proposing antitrust legislation to break up Big Tech, and CEOs are being questioned regularly before congressional panels. Will these issues get in the way of real data-privacy legislation coming out of Washington? Are they, essentially, “smokescreen issues”?

It will be very interesting to see how the dynamics play out. There’s a lot of interest in Congress stepping in, because they have sat on their hands for so long.

You can look back in history and see that, as early as 1999, even the Federal Trade Commission was saying that the industry wasn’t going to regulate itself and that Congress needed to step in.

There was actually a bipartisan effort in 2000 to pass privacy law, but it ran into a wall with 9/11.

It has been coming for a long time — and, frankly, it’s something Congress is way behind on.

Our existing privacy laws are horrendously out of date — and everyone recognizes that Congress needs to act, including members of Congress.

“There needs to be a federal, comprehensive data-protection law.”

The question is whether they can build consensus or get tripped up on the particulars of the issue as it develops.

But we are going to continue to advocate for strong comprehensive privacy reform and for a Data Protection Agency in the United States.

Vaughn Cockayne is a writer in Washington.
