The Guardian has revealed the closely guarded guidelines Facebook gives its moderators for handling violence, hate speech, revenge porn, and more on the social media platform.
The guidelines show the blurred and imperfect line between what's considered dangerous material and acceptable content on the 2-billion-user social network.
Documents obtained by The Guardian explore the murky waters that moderators and executives must wade through on a daily basis as they decide whether content is offensive and dangerous or acceptable. A Facebook user who exclaims "To snap a bitch's neck, make sure to apply all your pressure to the middle of her throat" gets a pass, but a commenter who says "Someone shoot Trump" is taken seriously.
The investigation arrives at a crucial time for the social media giant. Facebook faces public scrutiny after live videos of sexual assaults and murders were broadcast on its Facebook Live platform.
Facebook says it values its users' privacy and freedom of expression, and wants to strike a balance between eliminating offensive content and preserving that freedom.
More than 100 internal training manuals, spreadsheets, and flowcharts form the framework for how Facebook moderates issues such as violence, hate speech, terrorism, racism, revenge porn, and self-harm.
Facebook's manual on credible threats of violence https://t.co/UPYIUrmVzw— The Guardian (@guardian) May 21, 2017
Many moderators have said they find the policies inconsistent and confusing. For example, not all rape threats are treated equally when they perhaps should be.
"Facebook cannot keep control of its content," says an anonymous moderator.
"It has grown too big, too quickly."
Documents supplied to Facebook moderators within the last year include lists of comments that are considered unacceptable, as well as comments that are deemed acceptable. Here are some examples involving "credible violence."
As a head of state, President Donald Trump is in a protected category, so threats against Trump (empty or otherwise) are forbidden. Yet instructions on literally breaking someone's neck aren't considered a credible threat, apparently because they only count as hypothetical misogyny and abuse. Statements such as the following are considered acceptable:
"Little girl needs to keep to herself before daddy breaks her face"
"You assholes better pray to God that I keep my mind intact because if I lose I will literally kill HUNDREDS of you."
Although alarming, these statements are permitted because they are "aspirational or conditional" and therefore not credible. I'm not so sure about that, personally.
According to another part of the documents, Facebook's policies on animal abuse allow certain photos and videos to remain on the site for "awareness." Facebook can still flag such content as "extremely disturbing" to warn users before they view it.
"Generally, imagery of animal abuse can be shared on the site," one slide says.
"Some extremely disturbing imagery may be marked as disturbing."
With 2 billion users, it's inevitable that this kind of content is going to surface. Whether it's internet trolls, young teens trying to be edgy, or genuinely credible threats, moderation isn't easy. Still, some of these guidelines seem rather lax and hypocritical, and some statements need to be taken more seriously.