Editor's Note: This story contains images and language that some readers may find disturbing.
Mark Zuckerberg — one of the most insightful, adept leaders in the business world — has a problem. It's a problem he has been slow to acknowledge, even though it's become more apparent by the day.
Several current and former Facebook employees tell NPR there is a lot of internal turmoil about how the platform does and doesn't censor content that users find offensive. And outside Facebook, the public is regularly confounded by the company's decisions — around controversial posts and around fake news.
(Did Pope Francis really endorse Donald Trump? Does Hillary Clinton really have a body double?)
Behind whatever the controversy of the moment happens to be, there's a deep-seated problem. The problem is this: At age 19, the then-boy genius started a social network that was basically a tech-savvy way to check out classmates in school. Then, over the course of 12 years, he made some very strategic decisions that have morphed Facebook into the most powerful distributor on Earth — the new front page of the news for more than 1 billion people every day. But Zuckerberg didn't sign up to head a media company — as in, one that has to make editorial judgments.
He and his team have created a complex, contradictory set of rules — a bias toward restricted speech for regular users, and toward free speech for "news" (real or fake). And the company relies on a sprawling army of subcontractors to enforce the rules. People involved in trying to make it work say they're in way over their heads. As one employee put it, "We started out of a college dorm. I mean, c'mon, we're Facebook. We never wanted to deal with this s***."
Subcontractors running the show
NPR got the official version of how the company decides what to censor and what to leave up this summer, when Facebook's head of policy, Monika Bickert, agreed to a phone interview. In all, we spoke with 10 current and former employees, on the record and on background, for this investigation.
It's hard to remember this sometimes, but Facebook has never claimed to be a free-speech platform. The company is trying to create a safe space where, unlike on Twitter, people can share without being trolled or shamed. Bickert is in charge of setting the content policies. The Community Standards, which are posted online, are the rules for everyday users.
She explained that when a user reports a piece of content that might be offensive, the company exercises its power to censor with precision.
"Context is so important. It's critical when we are looking to determine whether or not something is hate speech, or a credible threat of violence," she said. "We look at how a specific person shared a specific post or word or photo to Facebook. So we're looking to see why did this particular share happen on Facebook? Why did this particular post happen?"
However, three of Bickert's former colleagues tell a very different story of how Facebook deals with controversial content. They and others declined to be named for fear of job repercussions (at Facebook or at their current employers, also Internet companies), but their descriptions are consistent with each other.
When a user flags a post on Facebook — whether it's a picture, video or text post — it goes to a little-known division called the "community operations team."
In 2010, the sources say, the team had a couple hundred workers in five countries. Facebook found it needed more hands on deck. After trying crowdsourcing solutions like CrowdFlower, the company turned to the consulting firm Accenture to put together a dedicated team of subcontractors. Sources say the team is now several thousand people, with some of the largest offices in Manila, the Philippines, and Warsaw, Poland.
Current and former employees of Facebook say that they've observed these subcontractors in action; that they are told to go fast — very fast; that they're evaluated on speed; and that on average, a worker makes a decision about a piece of flagged content once every 10 seconds.
Let's do a back-of-the-envelope calculation. Say a worker is doing an eight-hour shift, at the rate of one post per 10 seconds. That means they're clearing 2,880 posts a day per person. When NPR ran these numbers by current and former employees, they said that sounds reasonable.
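For readers who want to check the arithmetic, here is a minimal sketch of that calculation in Python. The uninterrupted eight-hour shift and the 10-second pace are the figures sources described to NPR; the script simply multiplies them out.

```python
# Back-of-the-envelope check of the pace sources described to NPR.
# Assumes an uninterrupted eight-hour shift and one decision every 10 seconds.
SHIFT_HOURS = 8
SECONDS_PER_DECISION = 10

shift_seconds = SHIFT_HOURS * 60 * 60               # 28,800 seconds in a shift
posts_per_shift = shift_seconds // SECONDS_PER_DECISION

print(posts_per_shift)  # 2880 flagged posts cleared per worker, per day
```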
A Facebook spokesperson says response times vary widely, depending on what is being reported; that the vast majority of flagged content is not removed; and that the numbers are off. Facebook did not provide alternative numbers.
If the sources — who have firsthand knowledge and spoke separately with NPR — are correct, then this may be the biggest editing — aka censorship — operation in the history of media. All the while, Facebook leaders insist they're just running a "platform," free of human judgment.
A person who worked on this area of "content management" for Facebook (as an employee, not a subcontractor) says most of the content you see falls neatly into categories that don't need deep reflection: "That's an erect penis. Check." So it's not like the workers are analyzing every single one in detail.
The problem is, simple and complex items all go into the same big pile. So, the source says, "you go on autopilot" and don't realize when "you have to use judgment, in a system that doesn't give you the time to make a real judgment."
A classic case is something like "Becky looks pregnant." It could be cyberbullying or a compliment. The subcontractor "pretty much tosses a coin," the source says.
Here's another huge barrier. Because of privacy laws and technical glitches (such as a post that is truncated to show only part of the conversation), the subcontractors typically don't get to see the full context to which Bickert referred so often.
Frequent errors
That could be the cause of frequent errors.
NPR decided to stress-test the system by flagging nearly 200 posts that could be considered hate speech — specifically, attacks against blacks and against whites in the U.S. We found that Facebook subcontractors were not consistent and made numerous mistakes, including in instances where a user calls for violence.
We say they were mistakes because the company changed its position in dozens of instances, removing some posts and restoring others — either when we flagged them a second time through the automated system or brought them to the attention of Facebook headquarters in Menlo Park, Calif.
Consider this post:
One user shares a video of a police officer kicking someone on the ground, and another says they "need to start organizing and sniping these bitches." This is a call to shoot cops.
And it's occurring in a very specific context: days after Philando Castile was shot and killed by an officer (the shooting's aftermath was live-streamed on Facebook); and in a city in Minnesota that's just a few miles away from where Castile lay bleeding.
The subcontractors did not remove the post. When NPR emailed Facebook headquarters about it, a spokesperson said they made a mistake and the post should have been removed.
One source tells NPR that the subcontractor "likely" could not see the full post if just the comment was flagged, could not see the profile of the user or even view the video to which the user was responding — again for privacy or technical reasons.
Different projects within Facebook have to compete fiercely for engineering talent, and have to make the case that their to-do list is worth expensive company resources. Two sources say that over the years, it's been hard to make that case for fixing the editing system.
NPR shared many posts with the company to get specific feedback on why something stayed up or was taken down. This post, with the hashtag #blacklivesdontmatter, was left up. The spokesperson says it should have been removed, but notes that a reviewer's perspective is not the same as a regular Facebook user's; and because the company is protecting privacy, reviewers don't have access to everything, which can affect their judgments. The spokesperson also says reviewers don't have the time that a person at Facebook headquarters or at NPR may have to make a decision.
Another spokesperson notes that just because the reviewer has a limited view does not mean Bickert's description is incorrect — that the two versions of the process are not mutually exclusive.
A restrictive platform
NPR also finds, interestingly, that Facebook can be strict.
Think of it this way. Every media outlet has its own culture and voice. If The New York Times is urban liberal, Fox News is conservative, and Playboy is racy, you could say Facebook aspires to be nice — a global brand that's as easy to swallow as Coca-Cola. (In this provocative talk at Harvard's Shorenstein Center, law professor and author Jeffrey Rosen says Facebook favors "civility" over "liberty.")
The bias is evident in these real-life examples:
NPR asked two newsroom veterans to put themselves in the position of a Facebook content arbiter. We don't have access to Facebook's internal guidance on hate speech, so they had to rely on their own judgment and familiarity with Facebook as users.
Lynette Clemetson, formerly with NPR and now at the University of Michigan, says of Post 1, "now that's just dumb." Post 2, she says, sounds like a "rant." Chip Mahaney, a former managing editor at a Fox television station, agrees. Both experts venture to guess that the posts are acceptable speech on the platform.
To their surprise, they're not.
When NPR flagged the posts, Facebook censors decided to take them down. The spokesperson explains to NPR: It's OK to use racial slurs when being self-referential. A black person can say things like "my niggers." But no one can use a slur to attack an individual or group. That's prohibited. A white person cannot use the word "nigger" to mock or attack blacks. Blacks can't use "crakkker" (in whatever spelling) to offend whites.
Wiggle room in pictures
But there are so many caveats and exceptions — particularly when it comes to interpreting images and videos.
Consider this post:
This is a noose, the kind used to hang slaves. Beside it is a sign. While the letters are faded, you can make out the words: "Nigger swing set." It was shared by a user named "White Lives Matter 2."
The news veterans would take it down. Clemetson says it's not quite a call to violent action, but it's certainly a reference to past action. "It's a reference to lynching — and making a joke of lynching." Mahaney says this is "obviously a pretty difficult picture to look at" and his instinct is, unless there's a deeper context, "I would say it has no place on here."
Facebook left it up, and stands by that decision.
The spokesperson explains that this historical reference doesn't clearly depict a human victim. If the image included a person, a specific subject of a hate crime, then it would be removed.
The spokesperson also made a claim that has not panned out: that in some situations, like this one, Facebook requires the user who created the page to add their real name to the "about" section. By removing anonymity, the hope is, people will be more thoughtful about what they post.
The spokesperson says "White Lives Matter 2" was told to identify him or herself promptly. Yet more than a month after NPR flagged the post, it hadn't happened — yet another way Facebook's enforcement mechanism is broken.
With news, a double standard emerges
As if the rules for everyday users weren't nuanced enough, add to that a new plot line: CEO Zuckerberg decided to forge partnerships with news media to make his social network the most powerful distributor of news on Earth. (Facebook pays NPR and other news organizations to produce live videos for its site.)
What Facebook has found in the process is that it's much harder to censor high-profile newsmakers than it is to censor regular users. As one source says, "Whoever screams the loudest gets our attention. We react."
And that's making newsworthy content a whole other category.
Consider the scandals around "Napalm Girl" and Donald Trump. In the first, Facebook was slammed for not allowing users to share a Pulitzer Prize-winning photo because it showed child nudity. In the second, Facebook came under criticism for allowing Donald Trump to call for a ban on Muslims coming to the U.S. — what is clearly hate speech under its regular rules, according to two former employees and a current one. One source said, "He hadn't even won the Republican primary yet. What we decided mattered."
In both cases, the company caved to public pressure and decided to bend the rules. The source says both decisions were "highly controversial" among employees; and they signal that Facebook leadership is feeling pressure to move toward a free-speech standard for news distribution.
How to define news is a huge source of controversy — not just with the elections, but with every charged moment. After Trump won, some critics blamed fake news on Facebook for the election's outcome. (Zuckerberg dismisses that idea.) After Philando Castile was shot by police, the video of the shooting's aftermath disappeared from Facebook. The company says it was "a glitch" and restored it. And the concern over its removal is part of an ongoing debate about whether police-civilian standoffs should be live-streamed.
What now?
Some in Silicon Valley dismiss the criticisms against Facebook as schadenfreude: Just like taxi drivers don't like Uber, legacy media envies the success of the social platform and enjoys seeing its leadership on the hot seat.
A former employee is not so dismissive and says there is a cultural problem, a stubborn blindness at Facebook and other leading Internet companies like Twitter. The source says: "The hardest problems these companies face aren't technological. They are ethical, and there's not as much rigor in how it's done."
At a values level, some experts point out, Facebook has to decide if its solution is free speech (the more people post, the more the truth rises), or clear restrictions.
And technically, there's no shortage of ideas about how to fix the process.
A former employee says speech is so complex, you can't expect Facebook to arrive at the same decision each and every time; but you can expect a company that consistently ranks among the 10 most valuable on Earth, by market cap, to put more thought and resources into its censorship machine.
The source argues Facebook could afford to make content management regional — have decisions come from the same country in which a post occurs.
Speech norms are highly regional. When Facebook first opened its offices in Hyderabad, India, a former employee says, the guidance the reviewers got was to remove sexual content. In a test run, they ended up removing French kissing. Senior management was blown away. The Indian reviewers were doing something Facebook did not expect but which made perfect sense given local norms.
Harvard business professor Ben Edelman says Facebook could invest engineering resources into categorizing the posts. "It makes no sense at all," he says, that when a piece of content is flagged, it goes into one long line. The company could have the algorithm track what flagged content is getting the most circulation and move that up in the queue, he suggests.
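Facebook hasn't said how its review queue actually works, but Edelman's idea maps onto a familiar data structure. Here is a minimal sketch, in Python, of a priority queue that surfaces the most widely circulated flagged posts first; the FlaggedPost fields, the post IDs and the share counts are hypothetical illustrations, not Facebook's actual data model.

```python
import heapq
from dataclasses import dataclass, field


@dataclass(order=True)
class FlaggedPost:
    # heapq pops the smallest item first, so the priority is stored as a
    # negative share count: the most widely circulated post comes out first.
    priority: int
    post_id: str = field(compare=False)


class ReviewQueue:
    """Toy model of Edelman's suggestion: most-shared flagged content is reviewed first."""

    def __init__(self) -> None:
        self._heap: list[FlaggedPost] = []

    def flag(self, post_id: str, share_count: int) -> None:
        heapq.heappush(self._heap, FlaggedPost(priority=-share_count, post_id=post_id))

    def next_for_review(self) -> str:
        return heapq.heappop(self._heap).post_id


# Hypothetical example: a widely shared threat jumps ahead of a low-circulation post.
queue = ReviewQueue()
queue.flag("ambiguous-comment", share_count=12)
queue.flag("call-for-violence", share_count=50_000)
print(queue.next_for_review())  # -> "call-for-violence"
```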
Zuckerberg finds himself at the helm of a company that started as a tech company — run by algorithms, free of human judgment, the mythology went. And now he's just so clearly the CEO of a media company — replete with highly complex rules (What is hate speech anyway?); with double standards (If it's "news" it stays, if it's a rant it goes); and with an enforcement mechanism that is set up to fail.
Gabriela Mejias and Justina Vasquez contributed reporting to this story.
Copyright 2021 NPR. To see more, visit https://www.npr.org.