Facebook’s AI Breakthrough Carries Privacy Risks, Experts Warn

By Robert Bateman

Facebook recently claimed it had achieved a “paradigm shift” in AI development after training its algorithm on a billion Instagram photos.

But experts in privacy and AI ethics told Digital Privacy News that they were concerned about Facebook’s lack of transparency and proper risk assessment.

Facebook’s SEER (self-supervised) AI can correctly identify images 84.2% of the time after processing one billion “random, public, and non-EU images from Instagram,” according to a March 5 research paper released by the company.

While most AI development relies on using labeled images to help algorithms distinguish among objects, self-supervised AIs are trained on unlabeled images.
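The distinction can be sketched in a few lines of code. This is an illustration only, not Facebook’s actual method (the SEER paper describes a clustering-based approach); it uses a simpler, widely cited pretext task, rotation prediction, to show the core idea: in self-supervised learning, the training labels are manufactured from the data itself rather than supplied by human annotators.

```python
import random

def rotate(image, quarter_turns):
    """Rotate a square image (a list of rows) by 90-degree increments."""
    for _ in range(quarter_turns % 4):
        image = [list(row) for row in zip(*image[::-1])]
    return image

# Supervised learning needs a human-provided label for each image:
supervised_example = {"image": [[0, 1], [2, 3]], "label": "cat"}

# Self-supervised learning derives the label from the data itself.
# Here the "label" is simply the rotation we applied -- no annotation needed.
def make_pretext_pair(image):
    k = random.randrange(4)             # pick a random rotation (0-3 quarter turns)
    return {"image": rotate(image, k),  # transformed input
            "label": k}                 # label generated from the transform

pair = make_pretext_pair([[0, 1], [2, 3]])
```

A model trained to predict the rotation must learn something about the image’s content, which is why features learned this way transfer to tasks like the image identification SEER was evaluated on.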

In a blog post on the day the paper was released, Facebook’s chief AI scientist, Yann LeCun, called self-supervision “the dark matter of intelligence.”

He claimed self-supervised learning could help AI “glean a deeper, more nuanced understanding of reality” and could ultimately bring it “closer to human-level intelligence.”

‘Extremely Opaque’

But observers voiced fears about the risks to user privacy from Facebook’s AI program.

“Facebook is extremely opaque about how it develops and deploys algorithms,” said Amy Brouillette, research director at Ranking Digital Rights, a Washington-based group that evaluates digital platforms’ respect for user rights.

“Facebook does not publicly commit to protect users’ expression and privacy rights as it trains and deploys algorithms,” she said. “Nor does it conduct risk assessments about the impact of these systems on freedom of expression and privacy.” 

Facebook did not respond to a request for comment from Digital Privacy News.

“Facebook does not publicly commit to protect users’ expression and privacy rights as it trains and deploys algorithms.”

Amy Brouillette, Ranking Digital Rights.

“Before rolling out any additional AI features and programs, Facebook should take concrete steps to improve its governance and oversight of these systems,” Brouillette said. 

She also called on Facebook to publish policies regarding the impact of its AI systems on user privacy and human rights, be transparent about its use of personal data in training its algorithms and offer users ways to opt out of its AI programs.

Complaint Against Clearview

Matthias Marx, a member of the Hamburg-based hackers’ collective Chaos Computer Club, told Digital Privacy News: “People shared images on Instagram.

“Now, one billion of these images have been used to demonstrate that a new method of labeling images has better accuracy than other techniques,” he said.

SEER can help AI “glean a deeper, more nuanced understanding of reality.”

Yann LeCun, Facebook.

“Millions of parameters are involved in this kind of visual feature detection. The parameters describe the contents of the pictures. 

“If the pictures show people, these parameters include information about the people: about their faces, hair, skin color or other biometric data,” Marx said.

He is experienced in challenging companies about their use of biometric data. In February, Marx’s complaint against New York facial-recognition firm Clearview AI was heard by the Hamburg data-protection authority. 

The regulator determined that Clearview’s extensive biometric database, which the company extracted from social media photos without consent, was illegal in the EU.

“Biometric data are particularly sensitive,” Marx explained. “They describe unchangeable and unique characteristics of our bodies and behavior.

“Like our fingerprints, we cannot simply change the associated characteristics. 

“Biometric data are particularly sensitive. … Like our fingerprints, we cannot simply change the associated characteristics.”

Matthias Marx, Chaos Computer Club.

“However, processing biometric data is often error-prone and discriminatory,” he said.

“The way the data is processed is often incomprehensible to third parties. Combined with a belief in technology’s ability to solve all problems, this poses immense risks.”

GDPR’s Impact

Facebook has not explained why it excluded EU-based users from its dataset. Some observers have speculated that this is because of the EU’s strict privacy laws — in particular, the General Data Protection Regulation (GDPR), adopted in 2016 and enforced since 2018.

Nani Jansen Reventlow, founding director of the Digital Freedom Fund in Berlin, cited the Hamburg authority’s Clearview decision as an example of the EU’s comparatively strict data-protection regime.

“Facebook may well have wanted to avoid this kind of trouble for itself by steering clear of the EU altogether,” she told Digital Privacy News.


“Instagram users actually did consent to Facebook doing an awful lot with their data: Any information and images they upload can be licensed, replicated and modified.”

Nani Jansen Reventlow, Digital Freedom Fund.

Jansen Reventlow said the EU’s higher standard of consent might deter some companies from undertaking the mass collection of personal information in the EU.

“Upholding the claim that you’ve obtained explicit, informed consent to use data that may be classified as sensitive data becomes tricky when what you do is essentially a big data scrape,” she said.

“That being said, Instagram users actually did consent to Facebook doing an awful lot with their data: Any information and images they upload can be licensed, replicated and modified.

“This won’t necessarily protect the company when creating a dataset this large from its social media sites, though,” she said.

‘Material Impact’

Paolo Balboni, professor of privacy, cybersecurity and IT contract law at Maastricht University, said that Facebook’s decision “implies a material impact of the GDPR on the protection of the right to privacy.

“EU data-protection authorities have issued more than 500 administrative penalties that we are aware of to date,” Balboni told Digital Privacy News. “Some of these administrative sanctions may have impacted Facebook’s decision to exclude EU users’ photos.”

But Balboni does not believe that the protection of people’s privacy is incompatible with innovation in fields like AI.

“GDPR can provide a solid basis to develop a sustainable digital economy and society.”

Paolo Balboni, Maastricht University.

“It is important to underline that the scope of the GDPR is to drive innovation in full respect of EU fundamental rights and freedoms, in line with the key concept of data-protection by design,” he said.

“In this respect,” Balboni continued, “I strongly believe that the GDPR can provide a solid basis to develop a sustainable digital economy and society.”

Robert Bateman is a writer in Brighton, U.K.
