Facebook has spent the past 18 months hustling to show that it’s taking the spread of political misinformation on its platform seriously. It’s partnering with professional fact-checkers, throwing pennies at local news initiatives, and making the PR rounds to profess its love for journalism. But its recent handling of a clearly satirical blog post shows a basic structural problem with how the tech giant is attempting to police fake news.
On Thursday night, Adam Ford, editor and publisher of the Christian satire site the Babylon Bee, tweeted that Facebook had alerted him that one of the Bee’s posts had been flagged for “disputed” information. The message cited a “fact check” of the article by Snopes, one of several organizations Facebook partnered with in late 2016 in a well-publicized effort to clean up its platform:
Any human reading the headline of the Bee article—which neatly sums up what the satirical piece conveys—will recognize it as a joke. The post even includes an image of a CNN-emblazoned washing machine. Ford’s tweet quickly made the rounds on conservative media as yet more evidence of Facebook’s policies being tilted against conservative viewpoints. Snopes was portrayed as a lefty collaborator.
But any human who read Snopes’ “fact check” honestly could also see that it was in on the joke. The article was a grand total of 134 words—excluding quotes from the Babylon Bee—and fell under the site’s “Humor” tag.
“Although it should have been obvious that The Babylon Bee piece was just a spoof of the ongoing political brouhaha over alleged news media ‘bias’ and ‘fake news,’ some readers missed that aspect of the article and interpreted it literally,” Snopes founder David Mikkelson wrote in the piece.
So why run the check in the first place? “Our standard has always been that we tackle whatever people are asking about or questioning at the moment; we don’t make any value judgments about what’s too silly or obvious or unimportant to cover,” Mikkelson wrote in an email to Splinter, adding that Snopes has frequently fact-checked posts by The Onion and Clickhole. “We’ve seen countless examples of material that some portion of the audience identifies as seemingly ‘obvious’ parody or satire or leg-pulling, while many others accept (or at least are unsure of) its literal truthfulness.”
Fair enough. But Facebook uses such fact checks to decide which publishers will, in its own words, “see their distribution reduced and their ability to monetize and [advertise] removed,” which gives Snopes’ editorial choices real financial consequences. Mikkelson said he is unclear on how the tech giant makes such decisions, even though it cites Snopes in doing so:
We don’t have any control over what articles Facebook flags for their audience, or what measures they choose to implement, in response to fact checks. Facebook doesn’t provide us with a means of tagging material as ‘satire’ for their purposes, and even if they did its implementation would be problematic. Virtually every fake news site claims to be “satirical” in nature, even though most of them are simply engaging in clickbaiting and political trolling that is devoid of any qualities of genuine satire, and we aren’t — and can’t be — the Facebook arbiters of what is or is not “real” satire based upon our presumptions of the creator’s intent. It’s up to Facebook to decide for themselves what sites or articles they want to exclude from their flagging/penalty system on that basis.
Here lies an important disconnect between Facebook and its fact-checking partners: They’re not actually partners! Differentiating between satire and fake news is hard enough at times for readers, let alone for a massive platform governed by algorithms. One would think communication between fact checkers and Facebook’s enforcement mechanisms would be key. But that might require—gasp!—additional human intermediaries between the two parties.
While that’s no small ask of a platform of Facebook’s size, the company raked in more than $4 billion in profits last quarter alone. It can find a way. For what it’s worth, the tech giant quickly apologized for its mislabeling of The Babylon Bee piece after Ford and others spoke up.
“There’s a difference between false news and satire,” a Facebook rep told Splinter in an email. “This was a mistake and should not have been rated false in our system. It’s since been corrected and won’t count against the domain in any way.”
Call it a fake news false positive, many more of which should be expected as Facebook attempts to clean up its platform. Maybe that’s a price worth paying for tech giants moderating political information, particularly at a time when bad-faith actors invoke “satire” as a defense against any criticism. But it’s also worth asking how these decisions get made—and recognizing who could pay the price when Facebook misfires.
“It’s scary because Facebook is a huge portion of [The Babylon Bee’s] traffic,” Ford said via Twitter DM on Friday. “I decided to make a bunch of noise about this, and they ended up admitting to a ‘mistake,’ and we get to live another day. But if we were unwilling or unable to make a fuss about it, we would’ve just been out of luck.”