Daily Digest (7/3)

Facebook Admits to Improperly Giving User Data to Third Parties, Again; Detroit Police Chief Admits to 96% Facial-Recognition Error Rate; Florida Is First in US to Block Insurers From Using Genetic Data; Researchers: Chinese Mobile Surveillance of Muslims More Pervasive Than Once Thought. Click “Continue reading” below.

Facebook Admits to Improperly Giving User Data to Third Parties, Again

Facebook said Wednesday that thousands of third-party developers continued to receive updates to users’ private information well beyond the point when they should have.

The company said that, for an unspecified number of users, it failed to cut off the data trail — as promised in 2018 — 90 days after a person had last used an app.

In a blog post, Facebook said that the user data possibly involved email addresses, birthdays, language and gender — and that it was sent to as many as 5,000 apps after the 90-day threshold. 

“We discovered that in some instances apps continued to receive the data that people had previously authorized, even if it appeared they hadn’t used the app in the last 90 days,” Konstantinos Papamiltiadis, Facebook’s vice president of platform partnerships, wrote in the blog.

“For example, this could happen if someone used a fitness app to invite their friends from their hometown to a workout, but we didn’t recognize that some of their friends had been inactive for many months,” he said.

Mashable.com asked Facebook how many users had their data improperly sent to third-party apps and when the company discovered the error; Facebook did not immediately respond to those queries.

Sources (external links):

Detroit Police Chief Admits to 96% Facial-Recognition Error Rate

Detroit Police Chief James Craig acknowledged flaws in the department's facial-recognition software earlier this week, after the ACLU filed a complaint with the department over the wrongful arrest of a Black man based on the faulty software.

“If we would use the software only (to identify subjects), we would not solve the case 95-97% of the time,” Craig said, Ars Technica reports. “That’s if we relied totally on the software, which would be against our current policy.

“If we were just to use the technology by itself, to identify someone, I would say 96% of the time it would misidentify.”

The American Civil Liberties Union filed a complaint with the department on behalf of Robert Williams, who was wrongfully arrested for stealing five watches worth $3,800 from a luxury retail store.

Investigators first identified Williams via a facial-recognition search, but under questioning, Williams pointed out that the grainy surveillance footage did not actually look like him.

Police lacked other evidence tying Williams to the crime, so they let him go.

The ACLU has called on the Detroit Police Department, and other police agencies, to stop using the technology for investigations because of its high error rate and racially disparate impact.

Source (external link):

Florida Is First in US to Block Insurers From Using Genetic Data 

Florida has become the first U.S. state to enact a DNA privacy law, barring life, disability and long-term care insurance companies from using genetic tests for coverage purposes.

Republican Gov. Ron DeSantis signed the legislation into law Wednesday, The Center Square reports.

Sponsored by GOP state House Rep. Chris Sprowls, the law extends existing federal prohibitions — which bar health insurance providers from accessing results of DNA tests, such as those offered by 23andMe or AncestryDNA — to the other three insurance categories.

The bill was adopted by the House, 110-0, and by the Senate, 35-3.

“Given the continued rise in popularity of DNA testing kits,” Sprowls said, “it was imperative we take action to protect Floridians’ DNA data from falling into the hands of an insurer, who could potentially weaponize that information against current or prospective policyholders in the form of rate increases or exclusionary policies.”

Federal law prevents health insurers from using genetic information in underwriting policies and in setting premiums, but the prohibition does not extend to life, disability or long-term care coverage.

Source (external link):

Researchers: Chinese Mobile Surveillance of Muslims More Pervasive Than Once Thought

Newly discovered mobile-hacking tools have added to the already extensive picture of Chinese government surveillance of the country's Uighur Muslim minority.

Like Android-focused surveillance kits before them, the malicious software can steal sensitive data on targeted phones and turn them into listening devices, according to mobile security firm Lookout, which made the discovery.

The disclosure was reported by CyberScoop.com.

Some of the hacking tools have been in use for more than five years, but Lookout linked them to a vast spying effort tied to the Chinese government, underscoring the pervasive nature of the surveillance and the challenges of uncovering all of it.

“Our research found that there are eight malware families meant to stealthily spy on this ethnic minority at the minimum, with some of them expanding even more broadly in their targeting,” Kristin Del Rosso, Lookout’s senior security intelligence engineer, told CyberScoop.

One of the malware families was revealed in a 2013 report from the University of Toronto’s Citizen Lab.

For years, cybersecurity analysts have uncovered similar code and shown how it was critical to Chinese efforts to surveil the Uighurs, a Turkic-speaking Muslim minority who live in the Xinjiang region of northwestern China.

The Chinese government has detained more than 1 million ethnic minorities, many of them Uighur Muslims, in prison camps under the guise of “counterterrorism” and security — repression denounced by human-rights groups, CyberScoop reports.

Source (external link):

— By DPN Staff