This is Twitter's biggest step toward stopping abuse yet

Like a virus that mutates more quickly than the vaccines designed to stop it, Twitter’s problems with harassment are continually evolving: one week, unassuming punctuation marks become viral racial slurs; the next, the mundane names of Silicon Valley companies take on dark new meanings.

But a new feature Twitter is reportedly working on may go a long way toward fighting this ever-evolving abuse. On Sunday, as The Next Web reported, users spotted a new feature that lets individuals mute specific hashtags and phrases so they never appear in their feeds.

The feature, however, appears to have been released prematurely by accident; it is no longer available. (It surfaced after Twitter announced, during last week’s quarterly earnings report, that it is developing a suite of new safety tools and policies.) We hope to see it return in the near future.

The new feature is similar to one Instagram released earlier this year, and it is a promising strategy: it would let Twitter blunt abuse before its intended targets ever see it.

Balancing freedom of expression with users’ sense of safety on Twitter is a delicate mission. But letting users decide for themselves what kind of commentary goes too far means that when one user expresses a view that may be hurtful to another, the target doesn’t have to suffer the effects.

Over the past two years, Twitter has taken many steps to show its users that it takes harassment seriously: it has banned revenge porn, issued new anti-harassment rules, established a trust and safety council, and de-verified high-profile users whom it considers abusive. But those efforts don’t seem to have worked very well. To wit: a report from the Anti-Defamation League earlier this month found that of the 1,600 Twitter accounts that sent thousands of anti-Semitic tweets targeting journalists over the past year, Twitter suspended just 21%.

Rather than moderating abuse itself, Twitter seems to be trending toward letting individual users set those rules for themselves. In August, Twitter introduced a quality filter, along with a setting that lets users limit whom they receive notifications from. Giving users even finer-grained control over their notifications could curb abuse further. Allowing someone to block accounts with no avatar, for example, might go a long way toward shutting out users who made new accounts just to troll.

The risk, of course, is that users turn Twitter into a self-censored bubble. But any solution to harassment is likely to involve tradeoffs. This one lets Twitter avoid writing strict, blanket rules about what kinds of speech are and are not acceptable, while allowing harassment-fighting strategies to adapt at internet speed. Like Instagram’s new policy, it would be a move toward a more balanced internet.

On Nov. 15, we’ll be discussing strategies like this one that might actually help make the internet a nicer place at the Real Future Fair in Oakland. Come join us to take part in the conversation.
