Folta’s supporters made memes, too, with sayings like “Keep Calm and Stand With Folta” and his face fused with that of the revolutionary Che Guevara. After science journalist Brooke Borel wrote a story in BuzzFeed that examined the ethics of Folta’s practices and conflicts of interest, she received throngs of angry messages from his supporters who felt her story was a strike against science.

The harassment also made its way into the real world: the university was so inundated with requests to fire Folta that it changed his office number and asked the FBI's Domestic Terrorism Task Force to remain on alert.

After a few weeks, Folta and his university decided that the trolls had won. Folta announced via Facebook that he was stepping out of the public conversation.

“Professor Folta is an example that should worry all scientists, particularly those working on politically or ideologically charged issues,” wrote a scientist who blogs under the pen-name Orac and was a leading supporter of Folta. “What worries me is that this increasing use of harassing techniques will subject more and more scientists to pressure, and many of them will bow out of public discourse. It’s hard not to worry that the bullies could win.”

Orac's concerns echoed a 2015 report from the Union of Concerned Scientists, which warned that harassment threatened to compromise scientists' basic ability to do their work.

“The amplification in social media just made it awful,” Folta told me. “They don’t want to discuss the science. They want to destroy your career.”

👿  👿  👿

The ability to harass is practically engineered into the architecture of our online hangouts. On Reddit, for example, downvoting lets crowds silence unwanted voices. On Twitter, hordes of harassers can easily bear down on a single target at once. The comments section of a news article or YouTube video can easily transform into a forum for sexual harassment. It’s not just that it’s easier to ruin someone’s life when you think of them as a Twitter avatar. It’s that ruining their life takes dangerously little effort.

“You can instigate a mob on your coffee break and then go right back to work,” said Katherine Cross, a sociologist who studies online harassment at the City University of New York. “The structure of a social media website can do a lot of the work for you.”

In 2014, a Pew Research Center survey found that four in 10 people had experienced online harassment, the majority of it via social media. Never has it been so easy to spew vitriol and hate.

If online harassment is a problem built into the architecture of the digital world, then it stands to reason that the digital world is in need of a renovation. In the built world, environments are often engineered to influence how they are used. On a crime-plagued street, urban planners might install streetlights to deter crime. The internet needs a digital equivalent.

One solution often offered to combat online harassment is the idea of "safe" spaces. The internet, the argument goes, is only a place for everyone if everyone actually feels welcome on it.

Many people I interviewed were aggressively opposed to this idea. Perry Jones, the president of the Open Gaming Society, a group widely supported by Gamergaters, told me that freedom of speech should trump discomfort. (Though he does think that it should be easier to block things that you don’t want to see.)

But even those with a less libertarian view of online speech agree that the idea of a “safe” space online is an illusion.

“I prefer the language of safer spaces,” Cross, the sociologist, said. “Places where the ugliest manifestations of prejudice and abuse are far less likely to occur.”

Streetlights, after all, can't actually stop a crime from occurring. They can just make the idea of committing one a little bit more unappealing.

Cross thinks people don’t have a right to be free from discomfort, dissent or conflict. They are only entitled to safety from outright abuse, like rape threats, death threats, slurs and libel. “It's simply a matter of treating those you interact with as human beings,” she said.

Over the past two years, as cyberbullying has become a buzzword, companies have responded by decrying online harassment and introducing tools designed to fight back against it, like the ability to block or filter people on Twitter.

And social media companies have been reconsidering their rules of engagement in efforts to deter such behavior in the first place. Twitter, which once called itself “the free speech wing of the free speech party,” has banned revenge porn, issued new anti-harassment rules, and instituted a policy for people to request the removal of content related to dead family members. This month, it announced the creation of a Trust and Safety Council to come up with even more policies. Even Reddit, the internet’s original cesspool of hate, last year announced it would ban some disturbing subreddits and make others harder to access.

But those policies and tools have not gone far enough. They have largely been piecemeal responses to individual problems, like videos of ISIS beheadings or women sexually harassed by ex-boyfriends. There has been much less effort to treat harassment as a holistic issue, to rethink the architecture that encourages it in the first place.

In an op-ed last year, Google's Eric Schmidt suggested the creation of tools designed to "spellcheck" online hate and harassment.

"Without this type of leadership from government, from citizens, from tech companies, the Internet could become a vehicle for further disaggregation of poorly built societies," Schmidt wrote in The New York Times, "and the empowerment of the wrong people, and the wrong voices."

Schmidt viewed online harassment not as a problem neatly confined to one demographic, but as the result of the basic fallacy of an 'open' web: even if you allow everyone to say whatever they want, some people will always be silenced. Laws punishing online harassment are lacking, but so too are tools to discourage it in the first place. Schmidt argued that such a tool was necessary for the creation of a fair, just internet.

Until that happens, the trolled are fighting fire with fire. One strategy growing in popularity is trolling back. Last year a German art collective built a Twitter bot that targeted misogynist trolls by flooding them with patronizing messages when they used sexist phrases, such as #feministsareugly or “die stupid bitch.”

“We wanted to flip the script on the trolls, and offer a smart and funny feminist response,” said Ada Stolz, a pseudonymous member of the Peng! Collective, which built the bot. The bot offered links to instruction videos on how not to be a horrible, misogynistic troll.
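
The mechanics behind a bot like this are simple enough to sketch in a few lines. The version below is an illustration only, not the Peng! Collective's actual code: it watches a stream of posts, matches them against a list of trigger phrases, and replies with a canned, link-bearing message. The functions that would talk to Twitter, along with the link itself, are hypothetical stand-ins.

```python
# Illustrative sketch only -- not the Peng! Collective's actual bot.
# Pattern: poll for new posts, match trigger phrases, send a canned reply.
# fetch_recent_posts() and post_reply() are hypothetical placeholders for
# whatever social media API a real bot would use.

import time

TRIGGER_PHRASES = ["#feministsareugly", "die stupid bitch"]  # examples from the article

CANNED_REPLIES = [
    "Hi! It looks like you could use a refresher on not harassing people: {link}",
    "Friendly reminder that the person you're yelling at is a human being: {link}",
]

VIDEO_LINK = "https://example.org/how-not-to-be-a-troll"  # placeholder URL


def fetch_recent_posts():
    """Hypothetical stand-in: return a list of (post_id, author, text) tuples."""
    return []


def post_reply(post_id, author, text):
    """Hypothetical stand-in: send `text` as a reply to `post_id` by `author`."""
    print(f"Replying to @{author} (post {post_id}): {text}")


def run_bot(poll_seconds=60):
    """Loop forever: scan new posts for trigger phrases and reply to matches."""
    reply_index = 0
    while True:
        for post_id, author, text in fetch_recent_posts():
            lowered = text.lower()
            if any(phrase in lowered for phrase in TRIGGER_PHRASES):
                reply = CANNED_REPLIES[reply_index % len(CANNED_REPLIES)].format(link=VIDEO_LINK)
                post_reply(post_id, author, reply)
                reply_index += 1
        time.sleep(poll_seconds)


if __name__ == "__main__":
    run_bot()
```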

The campaign only ran for a week last year, but it inspired copycat bots among other victims of harassment, such as one pro-vaccine mom I spoke with who had been targeted by anti-vaxxers. Even if the bot didn't discourage all of her attackers, she told me that it gave her a sense of power and encouraged her to keep up her own work—in that sense, at least, it was a success.

Others have built custom tools for shielding themselves from harassers, like female game engineer Randi Harper's “Gamergate autoblocker,” which automatically blocks any Twitter user who follows two or more so-called figureheads of the Gamergate movement.
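
The rule at the heart of that tool can be sketched in a few lines as well. This is a rough illustration of the logic as described above, not Harper's actual code: count how many accounts from a short "figurehead" list each user follows, and if the count reaches two, add that user to a block list. The account names and the API-facing functions are hypothetical placeholders.

```python
# Rough sketch of the blocking rule described in the article, not the
# actual autoblocker. get_follower_ids() and block_user() are hypothetical
# stand-ins for real API calls.

FIGUREHEAD_ACCOUNTS = ["figurehead_one", "figurehead_two", "figurehead_three"]  # placeholders
BLOCK_THRESHOLD = 2  # block anyone following two or more figurehead accounts


def get_follower_ids(account):
    """Hypothetical stand-in: return the set of user IDs following `account`."""
    return set()


def block_user(user_id):
    """Hypothetical stand-in: add `user_id` to the shared block list."""
    print(f"Blocking user {user_id}")


def build_block_list():
    """Count figurehead follows per user and block those at or above the threshold."""
    follow_counts = {}
    for account in FIGUREHEAD_ACCOUNTS:
        for user_id in get_follower_ids(account):
            follow_counts[user_id] = follow_counts.get(user_id, 0) + 1

    to_block = {uid for uid, count in follow_counts.items() if count >= BLOCK_THRESHOLD}
    for user_id in to_block:
        block_user(user_id)
    return to_block


if __name__ == "__main__":
    build_block_list()
```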

Next month, the battle over online harassment will play out in real life at South by Southwest Interactive, one of the world's most well-attended technology conferences.

Last fall, South by Southwest announced two panels dealing with gaming and online harassment. The Gamergate crowd got wind of the panels and turned to a familiar tactic: threats of violence if people were given an IRL platform to air their views. After SXSW received violent threats, it canceled the panels. Then, after a torrent of angry tweets and pledges by major media companies to withdraw from the conference, SXSW reinstated both panels and apologized for mistakenly letting the internet’s hordes of anonymous bullies win.

Now SXSW is dedicating a whole day to an "Online Harassment Summit" in March. Because of the violent threats, the festival has implemented a "very strong" safety plan, director Hugh Forrest told me via email.

The presenters told me they want to make sure online harassment is not framed as a "women's problem" or a "teen problem" or an issue plaguing the gaming industry, because that ignores the true size of the issue. They will discuss why people become trolls. They will talk about how to stop them.

Much hope rests on the summit to hammer out actionable solutions that social media companies can implement to combat abuse.

"When we're talking about harassment, more often than not it's feminists who already agree on everything and are just polishing each other's apples," said Brianna Wu, an influential female game developer and speaker at SXSW’s harassment panel. "I'm excited about SXSW because I do believe it will be a chance to put concrete policies in place that end harassment. But moving the ball forward doesn't always mean getting your way."

👿  👿  👿

Online harassment is like a rip current threatening to whisk us all out to sea: panic and you drown; fight the current and you drown, too. Without better tools for preventing it, the only way to survive is to stay afloat long enough for the current to die down.

Curious as to how the non-profit that FOIA'ed Folta's emails felt about his being driven off the web by his harassers, I reached out to Gary Ruskin, the executive director of US Right to Know.

"I can’t speak to what other people are doing. I can only speak to my own work,” he told me over coffee, repeatedly emphasizing his lack of responsibility for Folta's harassment. “My own work is civil."

I asked Ruskin whether he felt that the personal attacks on Folta distracted from the scientific debate over genetically modified foods. Ruskin demurred. His mission is to expose scientists he believes are lobbyists for GMO corporations. He believes Folta is one of those scientists. Is the man who opens the gate responsible for what the mob does once it's inside the walls? On the internet as it exists, either Folta or Ruskin has to lose.

“I provide information and people take it and use it,” he told me, adding, “If you’re asking me if I believe in death threats, the answer is no.”

When I first reached out to Folta in December, he was still reeling from the attacks. His university declined to let him speak to me out of fear of the repercussions. Only in the last few weeks has he decided it's time to cautiously re-enter public life, reemerging on Twitter and relaunching his podcast and blog.

He has many regrets. He wishes he had been more upfront about his relationship with Monsanto. He wishes his university had turned down that $25,000 grant. But in the end it wasn't his actions or Ruskin's FOIA that tore apart his life and cost him career opportunities. It was the internet: the torrent of nasty Craigslist ads, harassing tweets, and threatening emails.

Moving forward, Folta has committed himself to obsessive transparency, documenting every coffee or donut anyone connected to the agriculture industry buys him.

"Disclosure needs [to] move beyond appropriate, and now be impeccably clear and obvious," he wrote on The Huffington Post. "Omission impossible."

Folta is an optimist. Justice, he feels, is just over the horizon.

“The facts and the data and the science will win eventually," he told me. "If they managed to keep me out of the public sphere for a few months, it’s only made me better and stronger.”