By Robert Bateman
The EU has proposed the world’s first comprehensive legal framework to regulate the development and use of AI systems, but some experts have argued that the rules do not go far enough.
The proposed regulation, presented in draft form Wednesday by the European Commission, takes aim at biometric surveillance, “social-credit” systems and other controversial implementations of AI that do not conform with “EU values.”
The regulation would impose fines of up to 6% of annual turnover for companies that infringe its rules.
Yet some privacy and public policy experts told Digital Privacy News that, in its current form, the new law could fail to protect Europeans from many AI-driven harms.
“The commission has acknowledged that some uses of AI are very problematic for fundamental rights,” said Daniel Leufer, a Brussels-based policy analyst for the global human rights organization Access Now.
“But they’ve left alone a lot of other very dangerous applications of AI, with very minimal obligations on them.
“They haven’t gone far enough,” Leufer told Digital Privacy News.
‘Negative Consequences’
In an explanatory memorandum accompanying the proposed regulation, the commission explained that AI could “bring a wide array of economic and societal benefits,” but that it also could produce “new risks” and “negative consequences” for individuals and society.
Leufer argued that the commission was “confused” about its overall objectives in proposing the regulation.
“The commission is trying to do two things, I think,” he began. “They’re trying to increase AI uptake — to have more AI in the EU.
“And simultaneously — and unfortunately, it seems, in a secondary position — they’re trying to protect fundamental rights,” he said.
“There’s a bit of a conflict between those two aims.”
Biometric Surveillance
In particular, Leufer noted the limited nature of a prohibition on biometric surveillance.
The proposed regulation prohibits the use of biometric identification systems in public spaces — but only where such systems are “remote,” operate in “real-time” and are used for law enforcement.
Leufer argued that this prohibition was “far too limited.”
“We support the idea of regulating AI systems,” he said. “But when you get down into the details, there are lots of concerns.”
Beijing Inferences
Lilian Edwards, chair of law, innovation and society at Newcastle University, said the “most controversial provisions” were rules barring “manipulative AI systems that cause individuals to act to their detriment,” “systems for ‘indiscriminate mass surveillance’” and “social-scoring systems — akin to those seen in China.”
Edwards pointed out that advocacy groups had been critical of exemptions to these rules on the grounds of “state-authorized ‘public security’ purposes.”
“However, this misses the point that these still may have remarkable impacts on the private sector,” she told Digital Privacy News, pointing to two areas in particular that could be affected by the regulation: manipulative cookie architectures (“dark patterns”) and “unethical targeted-ad scenarios.”
Edwards also noted that, unlike the EU’s General Data Protection Regulation (GDPR), the proposed AI rule “isn’t restricted to personal data.”
“This means that claiming, sometimes rather spuriously, that your system only deals with anonymous or aggregated data won’t get you out of the rules anymore,” she said.
“This is a huge leap forward.”
‘High-Risk’ AI Systems
Karolina Iwańska, a lawyer and policy analyst at Warsaw-based NGO Panoptykon Foundation, said the framework risked “becoming a superficial, bureaucratic process instead of a way to ensure that AI systems do not endanger fundamental rights.”
Iwańska criticized the regulation’s “risk-management” provisions, which would require developers of “high-risk” AI systems to test their impact before deploying them.
“EU citizens subject to AI systems that could fire them or decide on their access to state benefits will essentially have to depend on self-assessments done by the very developers of these systems — which could only be examined by regulators after the fact,” Iwańska told Digital Privacy News.
“What’s more, there seems to be no legal path for affected people or civil society organizations to challenge problematic AI systems or flag abuses to regulators,” she said.
“There is a lot of room for improvement if the EU wants to make sure that AI is, indeed, used for good.”
Robert Bateman is a writer in Brighton, U.K.