Facebook declares war on misinformation

The old refrain is that a lie travels halfway around the world before the truth puts its pants on. This week, Facebook declared that it wants to help the truth get to the party earlier.

In a blog post Tuesday, Facebook announced that it would start labeling suspected hoaxes and fake news with a warning, as well as reduce how often posts with misinformation appear in the News Feed. Posts with news that is too good (or too horrible) to be true will start carrying a label flagging them as suspected false news.

Facebook’s decision to help stop the spread of lies started with a 2013 study called “Rumor Cascades,” conducted by its Data Science Team. (That’s the same team that brought us the infamous “emotion contagion study.”) In the summer of 2013, the Facebook researchers looked at users who had posted false information on the site thinking it was true. They were able to identify fake stuff en masse by pulling in posts where the users’ friends had left links in the comments to the rumor-debunking site Snopes.com. Two examples the researchers cite of fake news spreading across the site like wildfire that summer were a photo that claimed to show Trayvon Martin at 17 (it wasn’t him) and a receipt suggesting that Obamacare would tax non-medical items like clothes and rifles (it was a bug in the sporting goods store’s software).

But the researchers also saw very old rumors resurface on the site, such as a photo of a bicycle stuck in the trunk of a tree, supposedly left behind by a soldier during World War I and meant to represent the cost of war. (The bike really was “eaten” by the tree, but it was actually just left behind by a forgetful dude in the 1950s.)
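
To make the study’s method concrete, here is a minimal sketch of that detection heuristic: a shared post gets flagged as a suspected rumor if any comment on it links to Snopes.com. The data layout, field names, and regex are illustrative assumptions, not Facebook’s actual schema.

```python
import re

# Hypothetical sketch of the "Rumor Cascades" detection heuristic:
# a post is a suspected rumor if a friend's comment links to the
# debunking site Snopes.com. Field names are illustrative only.

SNOPES_LINK = re.compile(r"https?://(?:www\.)?snopes\.com/\S+", re.IGNORECASE)

def snopes_links(comment_text):
    """Return any Snopes URLs found in a single comment."""
    return SNOPES_LINK.findall(comment_text)

def find_suspected_rumors(posts):
    """Yield (post, snopes_urls) pairs for posts whose comments cite Snopes.

    `posts` is assumed to be an iterable of dicts like:
        {"id": ..., "comments": ["comment text", ...]}
    """
    for post in posts:
        urls = [u for c in post["comments"] for u in snopes_links(c)]
        if urls:
            yield post, urls

# Example: the second post gets flagged because a friend linked to Snopes.
posts = [
    {"id": 1, "comments": ["So sad :("]},
    {"id": 2, "comments": ["This is fake: https://www.snopes.com/fact-check/example"]},
]
for post, urls in find_suspected_rumors(posts):
    print(post["id"], urls)
```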

The researchers found that the outrageous stuff traveled farther and faster than the debunkings of it. More hearteningly, they found that once users realized a post contained bad information (thanks to a Snopes link from a helpful friend), they were 4.4 times more likely to delete it than if no one had corrected them.

Now Facebook is turning that research into practice. The site is crowdsourcing the truth. It’ll identify bad posts in two ways: through reports from users flagging a link as a fake news story, and by taking “into account when many people choose to delete posts.” The finger has long been pointed at the Internet for facilitating the spread of hoaxes and lies; as a huge platform for the social distribution of news and information, Facebook could significantly deter the spread of false information with this move. The label it’s adding to posts is something people have long called for on Twitter, so that tweets containing false information would be flagged to help stop the spread of untruths. Twitter has not yet added such a feature.
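
Facebook hasn’t published how those two signals are weighted. The sketch below is a hypothetical illustration of how flag reports and deletions might feed a simple label-and-demote rule; the weights, thresholds, and field names are invented for clarity.

```python
# Hypothetical illustration of combining the two crowdsourced signals
# (user "false news" reports and sharer deletions). The threshold and
# the 0.5 ranking penalty are made up; Facebook's formula is not public.

from dataclasses import dataclass

@dataclass
class LinkStats:
    shares: int          # how many times the link was posted
    false_reports: int   # users who flagged it as a false news story
    deletions: int       # posters who deleted it after sharing

def hoax_score(stats: LinkStats) -> float:
    """Fraction of sharers who either flagged or deleted the link."""
    if stats.shares == 0:
        return 0.0
    return (stats.false_reports + stats.deletions) / stats.shares

def apply_policy(stats: LinkStats, threshold: float = 0.2):
    """Label the link and demote it in feed ranking if the score is high."""
    if hoax_score(stats) >= threshold:
        return {"warning_label": True, "feed_rank_multiplier": 0.5}
    return {"warning_label": False, "feed_rank_multiplier": 1.0}

# Example: a widely flagged and deleted link gets the warning treatment.
print(apply_policy(LinkStats(shares=1000, false_reports=120, deletions=150)))
```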

At the end of the day, Facebook’s system still relies on human beings to recognize that information is false and to delete stories they learn are wrong, so it’s only as good as Facebook’s users are at detecting BS. And it only applies to links to outside sites: you can’t use it to report your conservative uncle for a political diatribe full of incorrect facts unless he links to an outside source.

Stopping Facebook users from spreading misinformation about a murdered teenager seems like an obvious good, but critics immediately voiced concern that Facebook’s move would mean they’d no longer see satirical stories from The Onion on the site. Facebook says not to worry.

“We’ve found from testing that people tend not to report satirical content intended to be humorous, or content that is clearly labeled as satire,” write engineer Erich Owens and research scientist Udi Weinsberg in a Facebook post. “This type of content should not be affected by this update.”
