Among the measures Facebook introduced in response to Russian attempts to influence the 2016 election was increased transparency around political ads bought on the platform, including making all political ads available in an online archive. Earlier this year, the company began requiring all political ads to display information about who paid for them. That information is not required on such ads by U.S. law, thanks in part to Facebook itself and Hillary Clinton campaign lawyer Marc Elias, who helped the company evade FEC regulations.
But the rollout of these new rules has been rocky at best, and Vice News uncovered another flaw in the system in a story published Thursday. Their reporter, William Turton, formerly of GMG, managed to get Facebook to accept ads with “completely made up” information about who was paying for them. One ad, copied from a Russian-sponsored 2016 ad, was submitted for a page titled Ninja Turtles PAC and listed its sponsor as Mike Pence. Another ad, for Ratatouille for Senate, said it was paid for by the Islamic State. (None of the ads actually ran, but Facebook approved them, so Vice News could have run them if they wanted to, and I wish they had.)
In July, Bloomberg reported that the platform was flagging entirely non-political ads as political because they contained certain political keywords, like an ad for a discount on Bush’s beans at a Walmart in Texas, which was marked as political because it contained the word “Bush.” Similarly, ads that included the word “Clinton” but had nothing to do with the former president or his family were also blocked. And earlier this month, the New York Times reported the existence of ads listed as being paid for by “a freedom loving American Citizen exercising my natural law right, protected by the 1st Amendment and protected by the 2nd Amendment.” Doesn’t exactly roll off the tongue, or provide any information about who’s paying.
Earlier this year, we reported on a Twitter ads feature, “Ads Without Profiles,” which allows advertisers to run ads that aren’t connected to any real profile. Several of the ads we discovered were run by made-up groups, like the Middle America Project, which does not exist. Twitter refused to answer any of our questions about how it vets the information submitted by advertisers seeking to use the feature. As far as we know, a Russian influence campaign could buy an ads-without-profiles Twitter ad, say it’s from the United States Policy Program or something, and the user would never know.
The potential for disinformation and fakery on online platforms is vast, as basically everything about the online media environment since 2016 (and before) has demonstrated. Facebook and Twitter have been under enormous pressure to bring their transparency practices up to even the bare minimum of what they ought to be doing, after dragging their feet on, or outright fighting against, doing so previously. So perhaps it’s to be expected that their systems aren’t foolproof yet (Facebook told Vice News that enforcement “isn’t perfect”), but man, that’s a pretty gaping loophole.
This is a problem with massive online platforms in general. From YouTube’s algorithms marking LGBTQ content as explicit and placing big companies’ ads next to racist content, to Twitter suspending people after mass reporting by trolls, these huge platforms rely on mostly automated systems to function (and to sell as many ads as they do). Those systems are inevitably easier to game than platforms small enough to be effectively human-moderated.
But beyond the practical ease of doing something like submitting a fake Facebook ad, there’s also a lack of enforcement power. After all, this doesn’t happen much with political ads on TV or radio. You don’t see a lot of super PACs running ads and claiming they were by someone else, because there are FEC rules requiring ads to disclose who sponsored them, and you can get in trouble for not doing that or for lying about who it was (though the FEC is generally terrible at enforcement). The FEC has begun writing new rules that would cover online ads, but they won’t be ready in time for the 2018 midterms. The fact that Facebook is unable to prevent such blatant misuse of its platform, or to set it up in such a way that people advertising Bush’s beans don’t get wrongly penalized, is evidence that you simply cannot expect companies to police things like this themselves. Facebook couldn’t fine William Turton for running an ad pretending to be ISIS. Only the government can.