With nearly 2 billion users, Facebook reaches almost a quarter of the people on the planet. And while its broadcasting power can be used for promoting good causes and unleashing viral cat videos, it can also be used to distribute hateful and violent content. This has put Facebook in the uncomfortable position of making judgment calls about whether the millions of posts flagged by its users as objectionable each week should be allowed to stay, flagged to other users as disturbing, or removed completely. It’s an unprecedented responsibility at this scale.
“The range of issues is broad, from bullying and hate speech to terrorism and war crimes, and complex,” Monika Bickert, Facebook’s head of global policy management, recently wrote in an op-ed. To meet this challenge, she said, “our approach is to try to set policies that keep people safe and enable them to share freely.”
Once Facebook sets these rules, it relies on 4,000 human content moderators to apply them to individual flagged posts.
The job isn’t straightforward. According to a Guardian report based on thousands of pages of Facebook’s content moderator training materials, “Someone shoot Trump” should be permitted, but not the phrase “Let’s beat up fat kids.” Digitally created art showing sexual activity should be removed, but handmade erotic art is fine. Videos showing abortions are also permitted, as long as they don’t feature nudity.
Guidelines like these illustrate the complexity of content regulation, which, until social media came around, involved questions that, for the most part, only governments faced at scale. What constitutes dangerous speech? Should some people, such as the president, be treated differently when they voice criticisms, make threats, or engage in hate speech? When is it in the public interest to show obscenity or violence? Should nudity be permitted, and in what contexts?
Some of Facebook’s answers to these difficult questions mimic content…