By Robert Bateman
Facebook content moderators have published an open letter to CEO Mark Zuckerberg and other executives, accusing the company of risking their lives to “maintain Facebook’s profits during the pandemic.”
The Nov. 18 letter, signed by more than 200 content moderators worldwide and coordinated by U.K. legal nonprofit Foxglove, argued that moderators had been “forced” back into the office without hazard pay to perform work that is “psychologically toxic.”
Facebook’s moderators manually examine and remove “masses” of harmful content every day, including images of “violence, hate, terrorism, child abuse, and other horrors,” and “bear the brunt of the mental health trauma associated with Facebook’s toxic content,” the letter said.
The moderators called on Facebook to provide them with hazard pay, “real” health care and psychiatric care, and the opportunity to work from home whenever feasible, among other demands.
Facebook did not respond to a request for comment from Digital Privacy News.
Experts described content moderators as a crucial front line in the fight against online abuse, saying they performed vital work that machines could not do.
“Platforms are becoming, increasingly, the front lines of what used to be done by the police,” Ben Collier, lecturer in digital methods at the University of Edinburgh, told Digital Privacy News. “They’ve become the front lines of questions about justice, proportionality and democracy.
“This is the type of work that platforms have always relied on,” he added. “Whether it’s Facebook or your local newspaper — they all need content moderation.”
‘Invisible Work’
Collier said the Facebook moderators were “completely within their rights to organize.”
“This is sometimes called ‘invisible work’ — work that’s ignored, hidden away, and where the people doing it are quite badly treated,” he argued.
“That’s not true of all content moderators — there are some companies that are actually quite good at this.”
Collier contrasted Facebook’s approach with that of the online communication platform Discord, which he said was a good model for content moderation.
“(Discord’s moderators) are well-paid, highly-skilled — and there are much smaller numbers of them,” he said. “They’re in-house, and they’re given a lot more power and freedom to shape their own work.
“They’re also not forced to look at content they don’t want to look at.
“It’s actually considered to be a valued, skilled profession — in the same way that an engineer, a data scientist or a policy worker is,” he said.
Different Challenges
Collier acknowledged, however, that the challenges of content moderation at Discord were different: The platform is invite-only, smaller in scale and does not have public-facing content.
“Discord has also managed to develop an approach that works really well, without relying on AI,” he said.
The moderators’ letter recalled how Facebook turned to AI at the start of the pandemic to try to automate its content-moderation processes — without success.
“(Facebook) sought to substitute our work with the work of a machine,” the letter said. “Without informing the public, Facebook undertook a massive live experiment in heavily automated content moderation.
“The AI wasn’t up to the job.”
‘Instrumental’ Tool
In the companion guide to its latest transparency report, also released Nov. 17, Facebook said that automation was “instrumental” in its efforts to remove harmful content but that it was “still a long way off from (automation) being effective for all types of violations.”
Collier argued, however: “You can use machine-learning for really easy stuff. But you need a human to make context-sensitive judgments.
“Some things can be obviously not OK in some contexts but completely fine in others,” he explained. “For example, the way communities speak about themselves — using reclaimed slurs, for example — terms that could be abusive in one context could be affectionate in another.
“It’s very difficult for AIs to capture that.”
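Collier’s point can be made concrete with a toy example. The sketch below is written for this article rather than drawn from any platform’s real system: a context-blind keyword filter removes an affectionate, in-group use of a reclaimed term just as readily as an abusive one, which is exactly the judgment a human moderator can make and a simple automated rule cannot. The term list and sample posts are hypothetical.

```python
# Minimal illustrative sketch (not Facebook's system): a naive keyword filter
# cannot distinguish a reclaimed, in-group use of a term from an attack,
# because it ignores context entirely. The word list and posts are hypothetical.

FLAGGED_TERMS = {"queer"}  # a term that has been widely reclaimed by the community it describes

def naive_filter(post: str) -> bool:
    """Return True if keyword matching alone would remove this post."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & FLAGGED_TERMS)

posts = [
    "Proud to speak at our queer students' film night tonight!",  # affectionate, in-group
    "Get out of here, you queer",                                  # abusive
]

for post in posts:
    # Both posts trigger the same removal decision, even though only one is abusive.
    print(naive_filter(post), "->", post)
```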
Shagun Jhaver, a Seattle-based affiliate of the Berkman Klein Center for Internet & Society at Harvard University, researches social computing and human-computer interaction.
He told Digital Privacy News about the challenges of using AI to moderate online content.
“While automation is crucial to support the growth of social media giants like Facebook, accurate automated detection is not an easy task,” Jhaver said.
“It is arguably impossible to make perfect automated-moderation systems because their judgments need to account for the context, complexity of language and emerging forms of obscenity and harassment.”
‘Gray Areas’
Jhaver highlighted the difficulties in using automation to enforce different sets of policy guidelines.
“Moderating some content involves making decisions on the gray areas in platform policies that require subjective interpretation,” he observed. “Such policies are harder to implement using automated tools.
“Further, most AI systems currently cannot provide specific reasons for why each post was removed,” Jhaver added.
“Therefore, deploying such systems without keeping any human moderators in the loop may disrupt the transparency and fairness in content moderation that so many users value.”
But he insisted that there was a role for automation in Facebook’s moderation processes: developing tools that integrate into its moderators’ workflows.
The company’s focus, Jhaver suggested to Digital Privacy News, should be on determining “when automated tools should remove potentially unacceptable material by themselves — and when they should flag it to be reviewed by human moderators.
“It is critical for these tools to attain this balance, to ensure that unintended post removals are avoided — and at the same time, the workload of human moderators is substantially reduced,” he noted.
“When platforms try to reduce their dependence on human moderators in drastic ways,” Jhaver said, “they disrupt this balance and escalate moderation errors.”
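The balance Jhaver describes can be sketched as a simple triage rule. The Python below is an illustrative outline only, assuming a hypothetical classifier that scores each post’s likelihood of violating policy; the thresholds and names are invented for this example and do not describe Facebook’s actual pipeline.

```python
# Minimal sketch of the triage balance Jhaver describes, assuming a
# hypothetical classifier that returns a confidence score between 0 and 1.
# Thresholds and names are illustrative, not Facebook's.

AUTO_REMOVE_THRESHOLD = 0.98   # act automatically only when the model is very confident
HUMAN_REVIEW_THRESHOLD = 0.60  # uncertain cases go to a human moderator

def triage(confidence: float) -> str:
    """Decide what to do with a post, given the model's confidence that it violates policy."""
    if confidence >= AUTO_REMOVE_THRESHOLD:
        return "remove automatically"
    if confidence >= HUMAN_REVIEW_THRESHOLD:
        return "flag for human review"
    return "leave up"

# Raising AUTO_REMOVE_THRESHOLD reduces unintended removals but sends more work
# to human moderators; lowering it does the reverse.
for score in (0.99, 0.75, 0.30):
    print(score, "->", triage(score))
```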
Robert Bateman is a writer in Brighton, U.K.