
A Dizzying Year in Privacy: From Antitrust to a Lack of Trust

By Jackson Chen

Despite the unprecedented disruptions of 2020, the privacy world still saw many significant events and developments.

Early this year, COVID-19 led to privacy concerns over rushed contact-tracing apps and data breaches at overtaxed health care operations.

Nearly halfway into 2020, the European Union evaluated the effectiveness of its General Data Protection Regulation (GDPR), while a wave of racial-justice protests in the U.S. reinvigorated concerns about facial-recognition technology.

To cap off the year, Congress held several hearings with Big Tech CEOs, while federal and state agencies brought a series of regulatory actions against the companies.

Throughout the year, a vast array of companies were hit with data breaches — and U.S. politicians put forth a patchwork of privacy laws.

With such a dizzying year, Digital Privacy News reached out to experts for their insights on the biggest privacy developments for 2020.

Rising Congressional Pressure on Big Tech

The U.S. Congress called the CEOs of the major tech companies to testify three times on matters that included anticompetitive practices, platform immunity and content moderation.

The pressure from Congress to rein in Big Tech came to a head in July with a hearing before the Democratic-controlled House Judiciary Committee’s antitrust subcommittee, followed by sessions with members of two Republican-led Senate panels: the Commerce Committee in October and the Judiciary Committee the following month.

“Policymakers … have a better understanding on how the digital economy works and understanding that there need to be guardrails that protect consumers.”

Tom Wheeler, Brookings Institution.

“2020 was a watershed year in which the Big Tech strategy of getting to make rules for the digital economy came under bipartisan attention in the Congress for the first time,” Tom Wheeler, a visiting fellow in governance studies at the Brookings Institution in Washington, told Digital Privacy News.

“Policymakers have previously shied away from dealing with the issues of Big Tech, but I think they have a better understanding on how the digital economy works and understanding that there need to be guardrails that protect consumers.”

Major US Antitrust Lawsuits

With mounting legislative pressure, federal agencies also looked to scale back Big Tech’s reach.

The Justice Department filed an antitrust lawsuit against Google in October, while the Federal Trade Commission (FTC) sued Facebook for illegal monopolization earlier this month.

The commission’s action was accompanied by an antitrust lawsuit against Facebook brought by 48 Republican and Democratic state attorneys general, led by New York Attorney General Letitia James, a Democrat.

The actions relate mostly to anticompetitive practices, but they also could affect how Google and Facebook handle user privacy.

“Some of the rhetoric surrounding (these lawsuits) does try to address the larger policy issues with the tech titans,” said Mark MacCarthy, a senior fellow at the Institute for Technology Law and Policy at the Georgetown University Law School.

“Letitia James, the New York state attorney general, does say one of the problems is that Facebook has degraded the privacy for millions of Americans — and that’s one of the reasons she wants to use antitrust law.”

“Some of the rhetoric surrounding (these lawsuits) does try to address the larger policy issues with the tech titans.”

Mark MacCarthy, Georgetown University Law School.

MacCarthy added that these actions signaled that antitrust officials were interested in addressing larger policy concerns, though he noted the legal actions were unlikely to produce major changes in privacy intrusions or content moderation.

Europe Also Takes On Big Tech

On the other side of the Atlantic, the European Union also pursued action against Big Tech.

The EU filed antitrust charges against Amazon last month, claiming the online retailer accessed data from sellers on its platform to gain an unfair market advantage.

The challenge could bring fines of up to $28 billion, or 10% of Amazon’s annual worldwide revenue.

Shubha Ghosh, director of Syracuse University’s Intellectual Property Law Institute, told Digital Privacy News that the news of the EU’s move was not surprising, as many had been wary of the company’s growth for some time. 

“Competition law is not about data protection; it’s largely about affecting consumer demand and price.”

Shubha Ghosh, Syracuse University.

“The first thing to point out is that competition law is not about data protection; it’s largely about affecting consumer demand and price,” Ghosh noted. “Having said that, one of the interesting things about these internet markets is that data operates like price.”

Ghosh explained that giving up data could sometimes be the price of an online service, but he added that he remained unsure if any regulatory agency could stretch antitrust definitions to fit the digital world. 

GDPR’s First Evaluation 

After the EU’s GDPR took effect in May 2018, the breakthrough privacy law had its first required evaluation in June.

The evaluation found that GDPR achieved most of its intended objectives but that more could be done regarding enforcement. 

“Thanks to the EU’s data-protection law, consumers have a defense against companies that hoover up their data and watch their every step online,” Johannes Kleis, communications director at the European consumer rights organization BEUC, told Digital Privacy News.

“But the current delay with rendering decisions in important cases based on the so-called ‘one-stop-shop’ mechanism, which is triggered in case of EU-wide infringements, is deeply concerning.”

“Thanks to the EU’s data-protection law, consumers have a defense against companies that hoover up their data and watch their every step online.”

Johannes Kleis, BEUC, European consumer rights organization.

Kleis warned that the inefficiency and ineffectiveness of enforcement actions could lead to consumers not trusting that GDPR could effectively protect their data and privacy rights. 

California Voters Back Proposition 24

Proposition 24, also known as the California Privacy Rights Act, was approved by more than 56% of Californians in last month’s election.

The law expands the existing California Consumer Privacy Act, which took effect July 1, by letting consumers stop businesses from sharing their personal information, by limiting businesses’ use of certain customer information and by establishing a state privacy regulatory agency.

Lee Tien, legislative director for the Electronic Frontier Foundation (EFF), told Digital Privacy News that Prop 24 didn’t go into effect until 2023, meaning few changes to consumer privacy until then.

“We think that understanding where your information goes is important.”

Lee Tien, Electronic Frontier Foundation.

EFF neither supported nor opposed Prop 24, but Tien said the organization wanted the law to let consumers see where their personal data went and where it came from.

“We think that understanding where your information goes is important,” Tien said. “A part of better privacy is ensuring that consumers will always be able to know what is the base of their personal information and who knows these things about you.”

COVID and Contact-Tracing

With COVID-19 spreading at an alarming rate this year — more than 72 million cases and 1.7 million deaths so far, according to the World Health Organization — many countries looked to contain transmission of the virus through various methods, including contact-tracing.

However, countries like Bahrain, Kuwait and Norway initially proposed invasive apps, under the guise of public health, according to Amnesty International.

Rasha Abdul-Rahim, deputy director of Amnesty Tech, told Digital Privacy News that these three countries’ contact-tracing apps closely resembled mass-surveillance tools that were designed with live-tracking or near-live tracking of user locations. 

“A lot of governments started to rush out contact-tracing apps that were pretty badly designed without giving enough thought to peoples’ rights and freedoms,” Abdul-Rahim said, adding that some governments since have corrected their apps.

But many other countries faced a choice: adopt contact-tracing software developed by Apple and Google, which took a decentralized approach that kept data on users’ devices, or build their own less-privacy-focused alternatives that centralized all the data.
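The privacy advantage of the decentralized approach can be illustrated with a minimal sketch. This is not the actual Apple-Google Exposure Notification protocol — the function names, key sizes and hash construction here are simplified illustrations — but it shows the core idea: phones broadcast short-lived random identifiers, and exposure matching happens entirely on the user’s own device, so no central server ever learns who met whom.

```python
import os
import hashlib

def daily_key() -> bytes:
    """Each device generates its own secret key; it never leaves the phone
    unless the user tests positive and chooses to publish it."""
    return os.urandom(16)

def rolling_id(key: bytes, interval: int) -> bytes:
    """Derive a short-lived broadcast identifier from the secret key.
    Observers who overhear the identifier cannot link it back to the key."""
    return hashlib.sha256(key + interval.to_bytes(4, "big")).digest()[:16]

def exposure_check(heard_ids, published_keys, intervals):
    """Runs locally on each phone: re-derive identifiers from keys that
    infected users published, and compare them against identifiers the
    phone overheard via Bluetooth. No location data is sent anywhere."""
    derived = {rolling_id(k, i) for k in published_keys for i in intervals}
    return bool(derived & set(heard_ids))
```

In a centralized design, by contrast, every phone would upload its contact log to a government server, which could then reconstruct social graphs — the core of the privacy objection raised against the early U.K., Bahrain, Kuwait and Norway apps.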

“A lot of governments started to rush out contact-tracing apps that were pretty badly designed without giving enough thought to peoples’ rights and freedoms.”

Rasha Abdul-Rahim, Amnesty International.

The U.K., for instance, initially sought to create its own centralized contact-tracing app — the National Health Service felt it could gain more insight into the spread of the pandemic despite the accompanying privacy concerns — but months later adopted the Apple-Google method.

Even with the rollout of vaccines in several countries, contact tracing still is expected to be used as a preventative measure against COVID-19’s spread.

Data Breaches More Common

Data breaches have been trending upward in the last five years, according to David Opderbeck, co-director of Seton Hall University’s Gibbons Institute of Law, Science and Technology.

In January, a data breach at Marriott International affected the personal data of approximately 5.2 million guests. A 2018 hack affected up to 500 million guests of its Starwood subsidiary, which the Maryland-based company purchased two years earlier.

Another high-profile breach occurred this summer, when a Florida 17-year-old allegedly masterminded the hacking of verified Twitter accounts belonging to former President Barack Obama, Amazon CEO Jeff Bezos and rapper Kanye West.

In another large data breach, the profile information of nearly 235 million users of TikTok, Instagram and YouTube was compromised in August. 

“Many of the commercial breaches you see are often related to simple things.”

David Opderbeck, Seton Hall University.

Opderbeck told Digital Privacy News that a number of health care centers were hit with ransomware attacks because dealing with the coronavirus pandemic had left them vulnerable.

Additionally, Opderbeck noted that a leading cybersecurity company, FireEye, was hacked earlier this month — allegedly by nation-state actors linked to Russia — to obtain its penetration-testing tools, which typically are used to find weaknesses and vulnerabilities in digital networks.

“The marketplace for cybercriminals is getting more and more sophisticated to the point where anyone with a minimal degree of knowledge can easily buy a turnkey toolkit on the dark web,” Opderbeck explained.

“People are more aware of it, but many of the commercial breaches you see are often related to simple things like not patching or not being aware of exploits that everyone else is aware of.”

Zoom’s Rise Amid COVID

The videoconferencing company Zoom Video Communications Inc. rose quickly by filling a major need during the pandemic.

However, the rapid expansion exposed many security flaws, including “Zoombombing,” a weak privacy policy and poor encryption.

“Companies tend to ignore things like security in the beginning because they’re new and are trying to get their product to work,” Bruce Schneier, a fellow at the Belfer Center at Harvard’s Kennedy School of Government, told Digital Privacy News.

“Their security was behind what it should’ve been for a company that became what it was in March, so they had to scramble.”

To address the rapidly mounting privacy issues, Zoom upgraded its encryption standards in April, added two-factor authentication in September and rolled out end-to-end encryption in October.

Schneier said he now had more confidence in using the software.

“Companies tend to ignore things like security in the beginning because they’re new and trying to get their product to work.”

Bruce Schneier, Harvard University.

“They made a lot of effort in raising the level of security — and I’m really impressed with what they did,” he said. “They solved their ‘Zoombombing’ problem, made their encryption better and put in a lot of security features.”

Protests Over Facial Recognition

Facial-recognition technology came under fire, most notably after news reports that law enforcement was using it to identify protesters during demonstrations following George Floyd’s death in the custody of Minneapolis police in May.

Reports also ranged from Miami authorities identifying a woman who allegedly threw a rock at police during protests in Florida to New York Police Department officers arriving at the front door of a demonstrator who was accused of blaring into an officer’s ear with a megaphone.

“At the end of the summer, we were calling for a ban of facial-recognition technology, along with the development, use and export of these mass-surveillance devices,” Matt Mahmoudi, a researcher-adviser on artificial intelligence and human rights at Amnesty International, told Digital Privacy News.

 “Part of why we did that is precisely because these devices stand to be in violation of the freedom of assembly and the right to protest.”

Following the demonstrations, three large technology companies — Microsoft, IBM and Amazon — moved to restrict police access to their face software, with Amazon and Microsoft calling on Congress to address the issue.

“At the end of the summer, we were calling for a ban of facial-recognition technology, along with the development, use and export of these mass-surveillance devices.”

Matt Mahmoudi, Amnesty International.

But many local police departments and agencies still use some form of the technology, though several cities, including San Francisco, Boston, Portland, Ore., and Portland, Maine, have banned law-enforcement use of it.

Mahmoudi pointed out that the Portland, Ore., ban was the most comprehensive, barring the use of face technology by private companies as well as government agencies.

Calls for a US Privacy Law

With privacy touching several sectors — including legislative, tech and human rights — the desire for improved privacy standards has grown over the years.

Many legislators and privacy experts, like Wheeler of the Brookings Institution, have called for a federal privacy law that would comprehensively detail and protect consumer privacy rights.

Such a law, proponents argue, would keep companies from profiting unchecked from the collection of their users’ personal information.

“The rights of consumers are not protected because of the absence of a federal policy,” Wheeler told Digital Privacy News.

“As a result, we end up looking to Europe, who set the rules with the GDPR, or California with the CCPA — but what we need are a national set of rules that protect the privacy of consumers.”

But EFF’s Tien noted some of the difficulty in crafting such a national restriction.

“Everyone’s going to want to get it right, so there’s no risk for me that a properly legislated negotiated bill is going to be hasty,” he told Digital Privacy News.

“I think part of the problem is going too slow — and that we’ll never have change or reform.”

Jackson Chen is a writer in Groton, Conn.
