As Facebook has slowly come clean about its distribution of Russian-financed political ads during last year’s presidential race, the tech giant has launched a major PR salvo to show how much it loves democracy. CEO Mark Zuckerberg outlined the ways the company will be a “force for good” in a video last month. Ads in major newspapers promised “to fight any attempt to interfere with elections.” And in an interview with Axios reporter and human newsletter Mike Allen on Thursday, COO Sheryl Sandberg extolled the virtues of freedom of speech.
Sandberg’s comments, however well intentioned, raised yet more questions a day after she met with congressional leaders to discuss the Russian propaganda effort. Foremost among them: What exactly is Facebook fighting?
Sandberg defined the threat, which Zuckerberg initially shrugged off after the election, as people “posting in an inauthentic way to try to be deceptive and divisive.” That could theoretically cover a wide range of behavior. But in the context of Russian-bought ads targeted at American users, roughly 3,000 of which Facebook has since turned over to congressional investigators, she qualified the definition to focus on bots.
As Sandberg told Allen, emphasis mine:
[The ads] are down, and the pages [that created them] are down, because they were from fake accounts. … But a lot of them, if they were run by legitimate people, we would let them run. We spend a lot of time on what content can run on our platform. It’s a really important question, and it’s a difficult question.
It’s perhaps even the defining question of media in the 21st century. Sandberg’s insinuation here is that the identity of the creator of Facebook content—say, a bot versus a human—is a crucial distinction in determining when the platform can and should attempt to police misinformation campaigns. Vladimir Putin, much as he tries to fool us, is also a human. By Sandberg’s logic, the Russian dictator has free speech rights on Facebook. Should he be able to post content that may happen to influence other countries’ elections?
A lot of what we allow on Facebook is people expressing themselves. The thing about free expression is that when you allow free expression, you allow free expression. That means you allow other people to say things that you don’t like and go against your core beliefs. And it’s not just content, it’s ads. When you’re thinking about political speech, ads are pretty important.
There’s a lot going on here, and it speaks to just how complex and unique a set of questions Facebook must tackle. Where are the lines between content that goes “against your core beliefs,” political influence from foreign actors, and unintentionally shared misinformation? Where are the lines between Russian government officials’ free expression, foreign propaganda, and fake news for commercial gain? Where is the line between political advertising and other political speech? Even if Facebook can play whack-a-mole fast enough to ensure phony accounts can’t mass-target ads at people in Michigan or Wisconsin, what about everything else in the big blob of content on its platform?
It’s abstract and mushy, and Facebook’s scale would seem to push it near the bounds of human comprehension. I don’t envy Sandberg and her colleagues who are attempting to wrap their heads around it all. And to give them some credit: Facebook brass are taking some baby steps to introduce elements of transparency at a company that has historically been a black box.
Beneath it all, though, is the notion that Facebook provides a platform for public discussion—even for some bad people—and its duty is only to police those bad people when their content occasionally steps over certain lines. But that doesn’t square with much outside criticism of the company, as University of North Carolina Associate Professor Zeynep Tufekci summed up succinctly on Thursday:
Tufekci is calling into question the very structure of Facebook, how it organizes a would-be public square of users interacting with other users. It raises a much deeper set of issues, including the way Facebook’s commercial need for engagement in some ways incentivizes what Sandberg described as “deceptive and divisive” information. Which is why the tech giant must continue supporting free expression at all costs, even for unsavory individuals: To acknowledge their inherent potential for harm would be to acknowledge the platform’s inherent flaws.