The government wants Silicon Valley to build terrorist-spotting algorithms. But is it possible?

Last week, a bunch of important people from Washington, D.C. packed their bags and flew to California to meet with a bunch of important people from Silicon Valley. The occasion was not the usual round of fundraising dinners and donor-wooing — it was a terrorism summit.

Government types are freaked out about the role of technology in how groups like ISIS recruit members and plan attacks. They think the heads of tech companies like Facebook, Apple, Twitter, and Google can do more to help them keep the world safe. And so counter-terrorism officials got tech executives to spend a day with them in San Jose last Friday. Among the topics on the agenda were consumers’ access to encrypted communications that aren’t easily intercepted by the government (a horse that’s so beaten to death that it’s been zombified) and a new idea posited by the policymakers: some kind of technological system that could detect, measure, and flag “radicalization.”

A terrorist-hunting algorithm isn’t a completely off-the-wall idea. Financial companies have proposed scanning Facebook postings to help determine people’s creditworthiness. There are already products for police departments that dole out “threat scores” to individuals by scanning public social media activity and looking for key words; one police department’s use of a “beware algorithm” was recently revealed by the ACLU of Northern California.
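
To see how blunt that kind of keyword scanning can be, here’s a toy sketch of a keyword-weighted “threat score.” The terms, weights, and scoring scheme are invented for illustration; this is not any vendor’s actual algorithm:

```python
# Illustrative only: a toy keyword-based "threat score," loosely in the
# spirit of the social-media-scanning products described above. The
# flagged terms and weights are invented for this example.
FLAGGED_TERMS = {"attack": 3, "bomb": 5, "jihad": 4, "gun": 2}

def threat_score(posts: list[str]) -> int:
    """Sum weights for every flagged term found across a user's public posts."""
    score = 0
    for post in posts:
        words = post.lower().split()
        for term, weight in FLAGGED_TERMS.items():
            score += weight * words.count(term)
    return score

# A post about a "photo bomb" scores the same as one about a real bomb --
# exactly the kind of blunt matching that worries civil libertarians.
print(threat_score(["Nice photo bomb at the party!"]))  # -> 5
```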

But this proposal that tech companies might give their own users “radicalism scores” is more novel, and comes on the heels of the San Bernardino shootings, after which law enforcement discovered that one of the shooters had posted to Facebook advocating jihad.

A White House memo that went out to summit participants before Friday’s briefing acknowledged that such a system would raise privacy and civil liberties concerns, and that it’s “unclear” whether radicalization is as easily measurable as credit scores. But the memo said that “such a measurement would be extremely useful to help shape and target counter-messaging and efforts focused on countering violent extremism.” The Guardian reported that it was compared during the meeting to Facebook’s attempts to prevent users from committing suicide:

The social network’s chief operating officer, Sheryl Sandberg, walked government officials through how Facebook currently enables users to flag people who appear to be posting suicidal thoughts, a person familiar with the conversation said. The government officials in the room wondered if such a system could be used to flag terrorist content or detect a user who appears to be radicalizing, added the person, declining to be quoted on the record.

Facebook’s suicide prevention system is the digital equivalent of “See something, say something”—except it asks users to “See something, flag something.” (A more appropriate comparison might be Facebook’s automated scanning of user activity to bust sexual predators.) But is flagging radical thoughts as easy as flagging suicidal ones?

“It’s tricky to measure radicalism when someone hasn’t committed a crime,” said Gary LaFree, director of the University of Maryland’s academic center START, which studies terrorism. “When you dive into that, it gets very controversial.”

START built a database of 1,500 radicalized individuals—people arrested, killed or convicted while in pursuit of extremist ideologies—to try to figure out how they became far-left, far-right, or Islamist extremists. LaFree said that when START began looking at radicalization 12 years ago, researchers didn’t have a good understanding of how it happens. They thought it was a gradual process, in which someone steadily becomes more and more radicalized and then decides to act, and that there were strong ties between radical thoughts and radical action. But now they realize that’s not always the case, he said.

“Sometimes very radical thought doesn’t lead to action,” he said. “But then someone else will be loosely connected to a radical group, played soccer with someone for example, and then they’re willing to do something very radical. We understand the different pathways to radicalization better now, but it makes it even more complicated.”

LaFree says the science of radicalization prediction is still developing, and so far it hasn’t really involved social media analysis. Most radicalization measurement focuses instead on people’s psychological and sociological backgrounds. Lone terror actors, for example, LaFree says, are more likely to have histories of criminal activity, psychological issues, and military training.

“We’re learning more about the background things that predict radicalization, more than we knew 10 years ago. But we’re not at Minority Report yet,” LaFree said.

An algorithm designed to spot radicalized individuals, at this point, would generate a considerable number of false positives. There’s also the question of whether the tech giants the White House met with—Google, Facebook, Dropbox, LinkedIn, Twitter, Apple and Cloudflare—could actually build what policymakers want. As vaunted as “big data” is, it still struggles with sentiment, image and word analysis. Facebook and LinkedIn try to force us to friend our exes. Google’s artificial intelligence for photos labeled black people as gorillas. Are these same companies up to the task of identifying potential terrorists?
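
The false-positive problem is basic base-rate arithmetic: genuine would-be attackers are vanishingly rare in any user population, so even a highly accurate classifier will flag mostly innocent people. A back-of-the-envelope calculation, with all numbers invented for illustration:

```python
# Back-of-the-envelope base-rate arithmetic (all numbers invented).
# Even a very accurate classifier flags mostly innocents when the thing
# it's looking for is extremely rare.
population = 1_000_000_000   # roughly a Facebook-scale user base
base_rate = 1 / 100_000      # assumed fraction of genuinely dangerous users
sensitivity = 0.99           # fraction of real threats correctly flagged
false_positive_rate = 0.01   # fraction of innocents wrongly flagged

true_threats = population * base_rate
true_positives = true_threats * sensitivity
false_positives = (population - true_threats) * false_positive_rate

precision = true_positives / (true_positives + false_positives)
print(f"Flagged users: {true_positives + false_positives:,.0f}")   # ~10 million
print(f"Of those, actually dangerous: {precision:.2%}")            # ~0.10%
```

Under those generous assumptions, roughly 999 out of every 1,000 people the system flags would be innocent.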

Akli Adjaoute, the CEO and founder of Brighterion, a company that initially used artificial intelligence to detect credit card fraud, says his firm has built a tool that will be rolled out in Europe next month for use in transportation and border security. Adjaoute, who has a PhD in artificial intelligence, says the tool performs contextual word analysis and relationship analysis, tracking social media users over time and doling out alerts if they pass a certain level of radicalization.
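
Brighterion hasn’t published how its tool works, but the general pattern Adjaoute describes (score each message, accumulate a per-user score over time, alert past a threshold) might look roughly like the following hypothetical sketch, in which the per-message scorer, the decay factor, and the threshold are all invented stand-ins:

```python
# Hypothetical sketch of per-user tracking with a threshold alert, in the
# general shape Adjaoute describes. message_score() stands in for whatever
# contextual analysis a real system would do; the decay factor and
# threshold are invented values.
from collections import defaultdict

ALERT_THRESHOLD = 10.0
DECAY = 0.9  # older activity counts for less as time passes

def message_score(text: str) -> float:
    """Placeholder for a real model's per-message radicalization score."""
    return 1.0 if "extremist rhetoric" in text else 0.0

class RadicalizationTracker:
    def __init__(self):
        self.scores = defaultdict(float)  # user -> running score

    def observe(self, user: str, text: str) -> None:
        # Decay the running score, then add this message's contribution.
        self.scores[user] = self.scores[user] * DECAY + message_score(text)
        if self.scores[user] >= ALERT_THRESHOLD:
            print(f"ALERT: {user} crossed the threshold")

tracker = RadicalizationTracker()
tracker.observe("user123", "just watched the game")  # score stays near zero
```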

“Instead of processing credit card transactions as fraudulent, we process every tweet and message as radical or not,” he said. “Safety is number one for me. People talking about privacy don’t understand the risks.”

Automated pre-crime analysis like this has historically been controversial. And at this point, there’s no clear evidence that it works. But with terrorist groups like ISIS making greater use of social media, the government is under mounting pressure to address their activity there.

“On the internet, the advantage is on the side of the terrorists,” says LaFree. “They have hundreds of free workers. We have to pay [federal workers] to look at everything and are going to look like Big Brother if we do it.”

Or the government can try to get technology companies to do it for them, so that it’s Facebook and Google doing the watching rather than Big Brother.
