How Twitter treated death threats against me

Over the past year, Twitter has taken many steps to let its users know that it takes harassment very seriously: it has banned revenge porn, issued new anti-harassment rules, established a trust and safety council and de-verified high-profile users that it considers abusive.

“We suck at dealing with abuse and trolls,” former CEO Dick Costolo declared last year. In the time since, Twitter has made it very clear that it wants that to change.

But how effectively are Twitter’s harassment policies actually enforced?

Twitter’s abusive behavior policy is vague, so I wanted to conduct a sort of litmus test to find out what kinds of tweets Twitter would actually act on. After a story I wrote earlier this month about my abortion wound the Twitter mobs up, I decided to report a dozen or so of the worst tweets to Twitter to see how the company might respond.

Among the tweets I reported were those suggesting I deserved to die, those that called me a “whore” and a “sick bitch,” and tweets that proposed tracking down my location to enact some sort of sordid revenge. Of all of them, Twitter required the deletion of just one, a tweet that seemed no worse than the others. In response to the rest, Twitter sent me a form email to let me know that it “could not determine a clear violation of the Twitter Rules.”

It did not take action on this tweet:

Or these:

Or even these, which seemed most in conflict with Twitter’s abusive behavior policy:

Some of the tweets I had submitted certainly seemed to exist in the gray area of what Twitter considers abusive—they were mean, sure, but they weren’t violent and perhaps did not meet Twitter’s definition of “harassment” because the tweets weren’t especially threatening or repeated.

But others, like those from the tweeters who suggested I needed to “go” and that it was “op time,” seemed like clear violations: they encouraged others to target me, repeatedly tweeted at and about me and made references to violence.

Overall, my admittedly small sample size revealed virtually no consistency in what kinds of behavior Twitter will admonish.

When I reached out to a Twitter spokesperson for an explanation of why each tweet did or didn’t meet Twitter’s criteria for abuse, I suddenly received an email informing me that the two accounts I believed had clearly violated Twitter’s policy had been suspended. All a spokesperson would tell me, however, was, “We do not comment on individual accounts, for privacy and security reasons.” Twitter stood by the statement even after I explained that it was more than a little weird to protect my own account’s privacy from myself.

To be sure, Twitter has changed a lot since the days when it referred to itself as “the free speech wing of the free speech party.” A change in policy, though, does not always add up to a change in action. Twitter still hasn’t figured out where to draw the line when considering what kinds of non-violent speech constitute harassment. And it does not have mechanisms in place to ensure that behavior that’s clearly in violation of its policies gets dealt with.
