Facebook's massive, secret rulebook for policing speech reveals inconsistencies, gaps and biases
Facebook is attempting to tackle the misinformation and hate its platform has enabled with a massive, byzantine and secret rulebook, packed with spreadsheets and PowerPoint slides, that is updated regularly for its global content moderators.
According to the blockbuster New York Times report, the rules show the social network to be "a far more powerful arbiter of global speech than has been publicly recognized or acknowledged by the company itself." The Times discovered a range of gaps, biases and outright errors — including instances where Facebook allowed extremism to spread in some countries while censoring mainstream speech in others.
The rulebook's details were revealed Thursday night after a Facebook employee leaked more than 1,400 pages of the speech-policing rules to the Times because he "feared that the company was exercising too much power, with too little oversight — and making too many mistakes."
CEO Mark Zuckerberg's company is trying to monitor billions of posts per day in over 100 languages while parsing out the subtle nuances and complicated context of language, images and even emojis. The group of Facebook employees who meet every other Tuesday to update the rules, according to the Times, is trying to boil down highly complex issues into strict yes-or-no rules.
The Menlo Park, Calif., company then outsources the content moderation to other companies that tend to hire unskilled workers, according to the newspaper's report. The 7,500-plus moderators "have mere seconds to recall countless rules and apply them to the hundreds of posts that dash across their screens each day. When is a reference to 'jihad,' for example, forbidden? When is a 'crying laughter' emoji a warning sign?"
Some moderators vented their frustration to the Times, lamenting that posts left up could lead to violence. “You feel like you killed someone by not acting,” one said, speaking anonymously because he had signed a nondisclosure agreement. Moderators also revealed that they face pressure to review a thousand pieces of content per day, with only eight to 10 seconds to judge each post.
As part of its probe, the Times published a wide range of slides from the rulebook, ranging from easily understood to somewhat head-scratching, and detailed numerous instances where Facebook's speech rules simply failed. Its guidelines for the Balkans appear "dangerously out of date," an expert on that region told the paper. A legal scholar in India found "troubling mistakes" in Facebook's guidelines that pertain to his country.
In the U.S., Facebook has banned the Proud Boys, a far-right group that has been accused of fomenting real-world violence. It also blocked an advertisement about the caravan of Central American migrants put out by President Trump's political team.
“It’s not our place to correct people’s speech, but we do want to enforce our community standards on our platform,” Sara Su, a senior engineer on the News Feed, told the Times. “When you’re in our community, we want to make sure that we’re balancing freedom of expression and safety.”
Monika Bickert, Facebook’s head of global policy management, said that the primary goal was to prevent harm, and that to a great extent, the company had been successful. But perfection, she said, is not possible.
“We have billions of posts every day, we’re identifying more and more potential violations using our technical systems,” Bickert told the newspaper. “At that scale, even if you’re 99 percent accurate, you’re going to have a lot of mistakes.”
Facebook’s most politically consequential and potentially divisive document could be an Excel spreadsheet that, the Times reports, lists every group and individual the company has barred as a "hate figure." Moderators are told to remove any post praising, supporting or representing any of the people on that list.
Anton Shekhovtsov, an expert in far-right groups, told the publication he was “confused about the methodology.” The company bans an impressive array of American and British groups, he added, but relatively few in countries where the far right can be more violent, particularly Russia or Ukraine.
Still, there's inconsistency in how Facebook applies the rules. In Germany, where speech in general is more scrutinized, Facebook reportedly blocks dozens of far-right groups. In nearby Austria, it only blocks one.
For a tech company to draw these lines is “extremely problematic,” Jonas Kaiser, a Harvard University expert on online extremism, told the Times. “It puts social networks in the position to make judgment calls that are traditionally the job of the courts.”
Regarding how Facebook identifies hate speech, the Times reports:
The guidelines for identifying hate speech, a problem that has bedeviled Facebook, run to 200 jargon-filled, head-spinning pages. Moderators must sort a post into one of three “tiers” of severity. They must bear in mind lists like the six “designated dehumanizing comparisons,” among them comparing Jews to rats.
In June, internal emails reportedly showed that moderators were told to allow users to praise the Taliban, which is normally forbidden, if they mentioned the group's decision to enter into a cease-fire. In a separate email obtained by the newspaper, moderators were told to search for and remove rumors that wrongly accused an Israeli soldier of killing a Palestinian medic.
The Times investigation concludes by saying that a big hurdle to cracking down on inflammatory speech is Facebook itself, which relies on an algorithm to drive its growth that notoriously promotes "the most provocative content, sometimes of the sort the company says it wants to suppress."
“A lot of this would be a lot easier if there were authoritative third parties that had the answer,” Brian Fishman, a counterterrorism expert who works with Facebook, told the Times.
Fishman explained: “One of the reasons why it’s hard to talk about, is because there is a lack of societal agreement on where this sort of authority should lie.”