Instagram's new moderation policy is exactly how we should handle abuse on the internet

Taylor Swift’s Instagram account was a snake pit. An emoji one, anyway.

In July, an internet feud erupted over how much Swift had helped her ex, Calvin Harris, in writing his latest single with Rihanna. Harris slammed Swift and “her people” for trying to “tear” him down. Swift haters responded by spamming her Instagram with many, many snake emojis. That is, until Swift employed a new Instagram moderation tool that allowed her to filter them out.

The feature, which was already available to the high-volume accounts of celebrities and businesses, was rolled out to all of Instagram’s 500 million users this week. It lets you filter out words you find inappropriate, without having to rely on Instagram’s human moderators to weigh in. Users can create their own blacklist of troublesome terms, or select a default list of problem words Instagram has identified. If someone uses a term from that list, their comment is blocked from appearing for anyone but the person who wrote it. And if you are Taylor Swift, that list probably includes 🐍.
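Mechanically, that kind of filter is simple. Here is a minimal sketch in Python of a per-user blacklist with the visibility rule described above; Instagram has not published its implementation, so every name here is hypothetical.

```python
# Hypothetical sketch of a per-user comment blacklist. Not Instagram's
# actual code; names and the default term list are invented.

DEFAULT_BLOCKED_TERMS = {"spam", "hate"}  # stand-in for Instagram's stock list

class CommentFilter:
    def __init__(self, custom_terms=None, use_default_list=True):
        # Users can supply their own blacklist, use Instagram's default, or both.
        self.blocked = {t.lower() for t in (custom_terms or [])}
        if use_default_list:
            self.blocked |= DEFAULT_BLOCKED_TERMS

    def is_blocked(self, text):
        lowered = text.lower()
        return any(term in lowered for term in self.blocked)

    def visible_to(self, text, author, viewer):
        # The key rule: a blocked comment still renders for its author,
        # so the commenter never learns it was filtered.
        return (not self.is_blocked(text)) or viewer == author

# If you are Taylor Swift, your custom list probably includes the snake emoji.
swift = CommentFilter(custom_terms=["🐍"])
print(swift.visible_to("🐍🐍🐍", author="hater", viewer="hater"))   # True
print(swift.visible_to("🐍🐍🐍", author="hater", viewer="taylor"))  # False
```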

Social media companies have long struggled with how to balance the competing goals of protecting users and allowing free speech on the internet to flourish. Over the past two years, as online abuse has grabbed headlines, most networks have introduced new rules and tactics to clamp down on behavior that is particularly toxic. Twitter, for example, has backed down from its aggressive free speech stance to ban revenge porn, issue stronger anti-harassment rules and even suspend misbehaving users.

Still, most social media companies’ abuse-fighting rules lag behind, and where they do exist, they can be difficult to enforce, if they are enforced at all. It’s a complicated problem that requires striking a delicate balance. There may be no perfect solution.

But allowing users to create a blacklist of phrases enables Instagram to stop abuse before it actually happens, recognizing that not all comment moderators have the time or necessary knowledge to give every comment its due. Giving users options about what kind of content they see allows them to protect themselves without Instagram making overly zealous blanket rules that apply to everyone.

Fusion contributor Caroline Sinders has suggested that Twitter could mitigate abuse by giving users options for who can tweet at them. Allowing someone to block accounts with no avatar, for example, could help eradicate users who have made new accounts just to troll.
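A sketch of that idea: a per-user rule deciding who may tweet at you, with the no-avatar check Sinders describes. The `Account` fields and function names below are invented for illustration; this is not Twitter’s API.

```python
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    has_custom_avatar: bool  # False for the default "egg"-style avatar

def may_tweet_at_me(sender: Account, block_no_avatar: bool = True) -> bool:
    # One user-selectable rule: refuse mentions from accounts that never
    # set an avatar, a common tell for throwaways made just to troll.
    if block_no_avatar and not sender.has_custom_avatar:
        return False
    return True

fresh_troll = Account(handle="egg123", has_custom_avatar=False)
print(may_tweet_at_me(fresh_troll))  # False: filtered before it reaches you
```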

Instagram’s new strategy, of course, is not without some tradeoffs. A user has the option of blocking anything they want, which means they are free to create whatever kind of heavily censored online bubble they see fit. And determined trolls will always come up with a workaround, just like the Swift haters who got around the filter by simply writing out “Ssnnaakkee.” (Instagram does use a shadowban here in an attempt to discourage that, which means that while the comment doesn’t show up for anyone else, it does appear for the person posting it. This is important since knowing they’re blocked often makes trolls even angrier.)
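To see why “Ssnnaakkee” slips past a literal word match, and one standard counter-move (collapsing repeated characters before matching, a common trick, though not one Instagram has confirmed it uses), consider:

```python
import re

def collapse_repeats(text: str) -> str:
    # "Ssnnaakkee" -> "snake": fold any run of one character down to one.
    return re.sub(r"(.)\1+", r"\1", text.lower())

blocked = {"snake"}
comment = "Ssnnaakkee"

# A literal substring match misses the padded spelling...
print(any(term in comment.lower() for term in blocked))           # False: evaded
# ...but matching against the collapsed form catches it.
print(any(term in collapse_repeats(comment) for term in blocked))  # True
```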

Instagram’s new policy, though, is a step toward a more balanced internet. For Instagram users, there is now an option for dealing with abuse besides simply averting your eyes.
