Emergency text alerts will get longer, but that won’t stop the spread of misinformation

After the bombings in Chelsea in mid-September, Donald Trump said the U.S. was making itself vulnerable to terrorist attacks by not profiling people from “that part of the world.” It was easy to dismiss the statement as outlandish, absurd, and racist on its face. But similar preconceived notions about race and religion have nonetheless shaped the government’s anti-terrorism surveillance techniques, as crystallized by the startling mass text sent to New Yorkers two days after the Chelsea bombing.

The alert’s curt nature quickly drew criticism, as many said it encouraged New Yorkers to be suspicious of Muslims. Useful identifying information, such as Rahami’s height, weight, and distinguishing marks like tattoos, was missing. Going only on the information in the text (which was all the information most New Yorkers had), any young “brown person” could have fit the description.

Prominent app developer and writer Anil Dash likened it to a “scary alarm” that’s “optimized for panic,” while New York Magazine’s Brian Feldman said it “essentially deputizes the five boroughs and encourages people to treat anyone who looks like he might be named ‘Ahmad Khan Rahami’ with suspicion.” Rahami was eventually found when a bar owner recognized his face from CNN coverage.

Future text warnings won’t need to be so sparse on details. After years of pushback from phone companies, the FCC announced Thursday it would extend the character limit for these emergency alerts from 90 to 360 characters and allow for embedded images.

Senator Charles Schumer, in his proposal to update the service, said, “The bottom line is that in the era of Instagram, Facebook and SnapChat our Wireless Emergency Alert System needs to get as smart as our phones and be updated so it can deliver photos and other media that has information that can save lives.”

The update is designed to allow for greater context and information so that anyone “deputized” to look for suspects won’t have to fill in the gaps with preconceived notions. But the data gleaned from previous surveillance programs shows that fixing this problem is more complicated than it seems.

Consider the best-known crowdsourced surveillance system, “See Something, Say Something,” which has been a New York staple since it started appearing on buses and subways in 2002. The 24-hour tip line for reporting suspicious activity has the same problem as mass text alerts: misinformation. The overwhelming majority of tips called in to “see something” are erroneous or unactionable, and when The New York Times examined the program in 2008, more people had been arrested for making deliberately false calls to the service (five) than for plausible terrorist connections (zero).

Because there’s no simple way to coordinate and prioritize recorded data, officers are often overwhelmed with (largely useless) information. The ACLU obtained a 2009 Congressional Research Service report on nationwide suspicious activity reporting that concluded, “The goal of ‘connecting the dots’ becomes more difficult when there is an increasingly large volume of ‘dots’ to sift through and analyze.”

And since most tips lead nowhere, there’s always the possibility that a useful one will be ignored. Sociologist Harvey Molotch’s Against Security describes how crowdsourced systems have their integrity compromised by misinformation, reported purposely or not: “When people don’t tell the truth about security, when they give false impressions, it’s a real danger to believing in the system when in fact it does tell the truth.”

Crowdsourced surveillance techniques, by their very nature, rely on biased and potentially prejudiced information. Studies have shown that witnesses’ visual perception and recollection of events are influenced by race and can lead to false positives. Of course, there’s only so much that trained officers can do, and relying on civilians for information is unavoidable. But we need to be mindful of the biases that come into play when relying heavily on crowdsourced data.

Crowdsourced surveillance techniques, be they digital or physical, are not benign; people know they’re being watched and adjust accordingly. The “chilling effect” of online surveillance was reported by The Verge in 2013, following the release of joint reports by the CUNY School of Law and the Muslim American Civil Liberties Coalition on the NYPD’s surveillance of Muslims post-9/11:

By apparently singling out Muslims, the surveillance program created fears that something as simple as criticizing the police on Facebook or making the wrong friends could end up leading to investigation, let alone doing anything that would be described as radical. “You look at your closest friends and ask: are they informants?” says one Sunday school teacher.

The fear of being misidentified as radical leads to inhibitive self-monitoring for Muslim Americans. Following the alert, as Feldman wrote, anyone who “looked like they might be named Ahmad” was, in effect, being surveilled in the physical world, as millions of people tried to identify a subject.

Adding more information to text warnings may or may not reduce false identifications. But the economy of data collection that shapes our lives creates a climate of fear and self-monitoring for Muslims, one re-energized by techniques like the mass text after the Chelsea bombing.

Speaking to The Guardian, Muslim American Ahsan Samad said constant surveillance has made him feel like he’s always under attack: “But you can’t even turn to the authorities. They are the ones doing it. I know I am supposed to have rights as a citizen, but I think they have a different rulebook for people like me, for Muslim Americans.”

Tellingly, the text message alert system was last used in the wake of the 2013 Boston Marathon bombing, which killed three people. Though that alert only directed people to nearby safe zones, users on Reddit, Twitter, and other sites tried to crowdsource the investigation by scouring pictures and identifying men carrying backpacks. They falsely identified and accused two innocent students, who ended up on the cover of the New York Post with the headline “BAG MEN: Feds seek these two pictured at Boston Marathon.” (The New York Post later settled a defamation lawsuit filed by the young men.)

While crowdsourced surveillance can be an effective tool for identifying suspects, the biases and prejudices it brings with it can’t be avoided by upgrading technology. And neither can the consequences, to people of color and their communities, of crowdsourced paranoia.
