A woman sued Twitter for supporting terrorism. Why the lawsuit is nonsense.

Fighting the ISIS social media machine is a never-ending game of Whack-A-Mole. Twitter has banned more than 80,000 accounts associated with ISIS over the last year, according to the hacktivist collective CtrlSec, which tracks terrorist accounts. Often, suspended users pop up again soon after, a new number affixed to the end of their Twitter handle.


Twitter has long been a target of criticism for allowing terrorist propaganda on its platform. Initially, it was faulted for failing to suspend terrorist accounts, and then, once it started doing that, for failing to do so aggressively enough. Now it's even facing a dubious lawsuit over its handling of terrorism. A Florida woman whose husband was killed in a terrorist attack while working as a government contractor in Jordan says in a civil lawsuit filed this week that Twitter's role in ISIS' social media strategy amounts to a violation of the Anti-Terrorism Act.

Twitter will likely be able to get the lawsuit dismissed. Section 230 of the Communications Decency Act shields platforms like Twitter from liability for the content their users post. In the eyes of the law, holding Twitter accountable for what shows up on the network is a little like holding the postal service accountable for what people send in the mail.

"Imagine if after a game of Whack-A-Mole you got sued for giving material aid to the moles," said Ryan Calo, a law professor at University of Washington.

But just what to do about ISIS has been a tricky question for Twitter. For years, the company balked at the idea of policing how people used the platform. The lawsuit cites a 2014 interview in which Twitter co-founder Biz Stone responded to a question about ISIS on Twitter by saying, "if you want to create a platform that allows for the freedom of expression for hundreds of millions of people around the world, you really have to take the good with the bad."

Since then, the company's zealous free-speech stance has softened, with a new ban on revenge porn and an expanded definition of the "violent threats" prohibited on the platform.

"While we believe the lawsuit is without merit, we are deeply saddened to hear of this family's terrible loss. Like people around the world, we are horrified by the atrocities perpetrated by extremist groups and their ripple effects on the Internet," a Twitter spokesperson said in an emailed statement. "Violent threats and the promotion of terrorism deserve no place on Twitter and, like other social networks, our rules make that clear."


Twitter, the spokesperson said, has "teams around the world actively investigating reports of rule violations, identifying violating conduct, partnering with organizations countering extremist content online, and working with law enforcement entities when appropriate."

Lawmakers and government officials still haven't been satisfied by the response of Silicon Valley companies like Twitter to terrorism on social networks. One criticism of Twitter, in the lawsuit and elsewhere, is that it still mainly relies on others to flag terrorist accounts before taking them down. At a recent summit on terrorism attended by bigwigs from both the White House and Silicon Valley, government officials suggested that tech companies like Twitter could do more to proactively combat terrorism, perhaps by building some kind of technological system to detect, measure, and flag "radicalization."


But there is an inherent problem in insisting that social networks are responsible, either legally or morally, for policing terrorism. Even if Twitter could build such a tool (and it's really not clear that it could) and reliably whack every single terrorist mole that pops up on the network, doing so would mean asking Twitter to decide who is a terrorist.

What can Twitter do? It can collaborate with groups like CtrlSec and with the government to identify and shut down accounts more quickly. And it can establish clearer rules about the kinds of behavior that are and aren't allowed on the network. But handing Twitter engineers the responsibility for deciding who is a terrorist puts power where it doesn't belong.