Want $400 From Facebook?

Tech Giant Reaches Settlement for Breaching Ill. Biometric Privacy Laws

By Patrick W. Dunne

Facebook may owe you as much as $400.

The social media giant recently settled a class-action lawsuit and agreed to pay out $650 million to Illinois residents for violating the state’s Biometric Information Privacy Act (BIPA), one of the strictest laws of its kind. 

Known as Patel vs. Facebook, the case initially was brought by a group of plaintiffs in the Northern District of California in San Francisco, alleging the company had violated BIPA, which was enacted in 2008.

In August 2019, the Ninth Circuit Court of Appeals, also in San Francisco, affirmed the district court’s ruling that the alleged collection of face templates without consent was a concrete harm to users’ privacy rights, allowing the case to proceed as a class action.

The settlement was reached in July.

According to the suit, Facebook collected users’ face scans to create templates for its tag suggestion feature that identified people in photos.

Illinois law, however, bars companies from collecting biometric information unless they obtain consent from users — disclosing how they’ll use such data and how long they will keep it.

“Likely, Facebook viewed the risk of violation as less than the potential costs of compliance.”

Braden Perry, Kennyhertz Perry law firm.

“Facebook argued that users who were ‘tagged’ did not suffer harm and had shown no signs that the users would have done anything differently if consent had been obtained,” Braden Perry, a partner at the Kennyhertz Perry law firm in Kansas City, Mo., told Digital Privacy News.

“Essentially, it was a ‘no harm, no foul’ defense.

“Furthermore, it argued that users could turn off the tagging technology at their discretion,” Perry said. “Likely, Facebook viewed the risk of violation as less than the potential costs of compliance.”

However, a Facebook spokesperson countered to Digital Privacy News: “The merits of this case were never adjudicated.

“To date, the proceedings have all been procedural, testing the validity of federal court standing and whether the case should be brought as a class action.

“Facebook had a number of strong legal and factual defenses that remained available at trial and on appeal, including that we have always been very clear with people about the facial-recognition services we offer and provided clear disclosures to help them understand the controls they have,” the representative said.

Nov. 23 Deadline

Now, affected users can receive $200 to $400 from Facebook, provided they meet these conditions: They must have lived in Illinois for at least six months, and Facebook must have created and stored a face template for them after June 7, 2011.

“Facebook had a number of strong legal and factual defenses that remained available at trial and on appeal.”

Facebook.

“We focused on settling, as it was in the best interest of our community and our shareholders to move past this matter,” the Facebook spokesperson said. “Now that the court has approved the settlement, we’ve worked hard to ensure that people are able to learn about the settlement.

“The claims process is straightforward and user friendly, so it is easy for Facebook users to submit a claim if they are in the class and wish to do so,” she said.

Users must submit a claim by Nov. 23 to receive payment.

Biometrics and Big Tech

Patel vs. Facebook is one of many high-profile privacy cases against social media and big-tech corporations.

“This case represents the growing tension between social media and user rights,” Perry told Digital Privacy News.

“The Illinois law could be the catalyst needed to get technology companies to embrace the inevitable, get behind federal legislation and attempt to make it as advantageous to them as possible while protecting the privacy of consumers.”

The Federal Trade Commission fined Facebook $5 billion last year over eight separate privacy-related violations dating to 2012. The company also agreed to notify users more clearly about its face-scanning software.

Earlier this year, three big-tech companies — IBM, Microsoft and Amazon — said they would not sell facial-recognition technology to police and urged the federal government to regulate it.

While these announcements raised some eyebrows, many privacy advocates told Digital Privacy News that they worried about the companies’ motivations and feared such invasive technology would disproportionately harm marginalized communities.

Microsoft said it never had sold facial-recognition software to authorities in the first place, while Amazon promised to halt its sales for only one year.

Neither company said its pledge applied to such U.S. government agencies as the Department of Homeland Security.

Others noted that the announcements excluded items like Amazon’s Ring doorbell cameras, which were found to share footage with police departments and other third parties and to be vulnerable to hackers.

IBM, however, completely withdrew from the market, claiming such technology should not be used for discrimination or racial injustice.

Other Face Software Makers

Other facial-recognition companies remain in the market, however, including Clearview AI, which scrapes pictures from social media and sells access to the resulting database to law-enforcement agencies. The company also was the target of two recent BIPA-related lawsuits.

Clearview also had claimed that it sold its technology only to law-enforcement agencies — at least 2,200 of them — but a February hack revealed that other companies using the software included Macy’s, Bank of America, Walmart and Eventbrite.

Facial recognition has a number of applications, supporters contend, including securing mobile devices, preventing those on “do not fly” lists from boarding airplanes and fighting crime.

But it must be regulated, critics argue.

“Biometrics are increasingly used for verifying a person’s physical characteristics to verify identity,” Perry told Digital Privacy News. “It’s powerful because — unlike passwords, credit-card numbers or bank account information — biometrics do not change.”

Wide Latitude

Lack of regulation also gives companies wide latitude in how they use such data — and the information is inherently prone to hacking.

“Posting photos could lead to identity theft,” Perry said. “If you include additional information, such as names or birth dates, your security may be compromised by wrongdoers.

“Hackers take every opportunity to exploit high-profile targets — and social media data storage is a prime target.

“The result of having data compromised upon collection, storage or transfer is using that data to unlock a multitude of potentially sensitive information.”

Finally, facial-recognition software has been found to be rife with misidentification issues, particularly regarding people of color. Such errors can lead to innocent people being mistaken for criminals.

“In the end,” Perry told Digital Privacy News, “there will be sweeping legislation — and companies should prepare to address their privacy policies and ensure that customers understand how their data is being used and for what purposes.”

Patrick W. Dunne is a writer based in San Francisco.
