The problem with neighborhood policing apps


Last spring, a new app launched in New Orleans that lets people living in the French Quarter report crimes and take photos of suspicious people. The app and the program behind it, paid for with $500,000 from a local real estate developer, have very real consequences: off-duty police officers are paid $50 an hour to check out reports of crime or suspicious activity, as flagged by anyone using the app.

Called the French Quarter Task Force (FQTF), it’s been hailed as a massive success. NBC News, which called it the “Uber for cops,” reported in December that it had led to a 45% decrease in violent crime. That’s despite the fact that the New Orleans Police Department says the app “is meant to give people a way to report nonviolent crimes, like suspicious people.”

The app may now start spreading. A community policing group in St. Louis is looking into a trial run with it, depending on how a visit to New Orleans in May goes, reports the St. Louis Post-Dispatch.

The problem with the app, as ThinkProgress’s Carimah Townes points out, is that it leads to “over-policing and targeting the wrong people.” The “wrong people” are usually people of color. In case after case, apps designed for community policing have reflected the racial biases of the community members doing the policing.

In 2014, BART, the San Francisco Bay Area’s rapid transit system, released an app called BART Watch, designed to let passengers “easily report crimes, suspicious items or activities” on its trains. A year later, the East Bay Express obtained a month’s worth of data from the app and found that 198 of 793 reports noted the race of the person(s) being reported, and that 68% of those reports were about black riders, who make up only 10% of BART’s ridership, according to a 2008 study. Many of the reports were for behavior that wasn’t criminal. “Playing music, singing, dancing, talking loud or yelling, and taking up more than one seat [were flagged] as ‘disruptive behavior’ by a Black person that warranted a police response,” reported the Express.

Not long after that, in October 2015, The Washington Post reported on “Operation GroupMe,” a program in the Georgetown neighborhood of Washington, D.C., designed to let shopkeepers share information with one another and with police. In practice, the program surreptitiously and disproportionately targeted black shoppers: the Post reported that 70% of the “suspicious” people discussed through the program were black. Participants also uploaded hundreds of pictures of people they suspected, usually taken without permission.

Even without police involvement, such programs tend to become magnets for racism. SketchFactor and GhettoTracker.com became crowdsourced hubs for not-so-coded observation of, and commentary on, communities of color. The two services allowed users to rate and map, respectively, areas of their city that were “sketchy” and ought to be avoided. In GhettoTracker’s case, the app’s creator professed surprise at the pushback against its name and purpose.

The social network Nextdoor, which lets neighbors communicate and crime-spot, regularly stirs up disturbing conversations about race. Last year, Pendarvis Harshaw reported for Fusion on problems with a Nextdoor community in Oakland: black friends visiting a white friend’s house were reported by neighbors as “suspicious” and “sketchy” simply for hanging out while trying to find the house they were visiting. A community meeting was planned to discuss racial profiling on the site, but the meeting was open only to white people.

The reality is that these apps are crowdsourced extensions of the surveillance that America’s black population is subjected to every day.

As Kristin Henning, a former public defender and director of Georgetown University’s Juvenile Justice Center, explained last month at the Color of Surveillance conference in Washington, D.C., black Americans and black boys especially “are born into a life of surveillance.”

Henning described how this begins at a young age, explaining how even a 16-year-old client of hers might know “that he is being watched” at home, in his car, at work, and at school. Schools in particular are increasingly militarized, with officers carrying guns up to and including assault rifles.

And the data bears this out: police in schools are overwhelmingly watching black students. Henning cited schools in Jefferson Parish, Louisiana, which borders New Orleans, “where African-American students accounted for 80% of students who were arrested and referred to the police, although they make up only 41.5% of the students in the school.”

Of course, the problem doesn’t end with graduation. She also noted some jarring statistics about who gets stopped by New York City police.

“In 2014, New Yorkers were stopped by the police 46,000 times. And of those 46,000 times, 82% of the persons were innocent. 53% were African-Americans and 27% were Latino,” she said. “And in 2015, when New Yorkers were stopped only a mere 23,000 times, 80% were still found to be innocent. 54% were black and…11% were Latinos.”

Such stops often occur “based on ridiculously vague descriptions such as ‘black boys running’ or ‘black male in jeans and a hoodie.’”

The crowdsourced policing model often has ties to traditional law enforcement. As in New Orleans, many programs rely on off-duty cops paid as private patrollers, and their advocates often come from law enforcement: the head of the Neighborhood Security Initiative hoping to test the app in St. Louis was a police officer for over 20 years. And while it is notoriously difficult to get information from large police departments like the NYPD, it is at least possible through public records requests. Private groups like the FQTF, which has yet to respond to my inquiries for this piece, are under no legal obligation to comply with such requests.

It’s possible that these sorts of services could be steered toward more fruitful ends, but only with transparency. That takes neighbors calling out racism, as Harshaw also saw happen on Nextdoor; reporting on the design flaws that encourage racial profiling; and analysis of the data to assess how many innocent people are being reported on, and who they are. In the case of private groups like the FQTF, such data isn’t necessarily available.

Transparency, unfortunately, seems to be in short supply. In early March, WVUE reported that FQTF’s wealthy backer, real estate developer Sidney Torres, was likely to receive more control over the program; he was described as wanting a tighter leash on the officers involved.

“This task force is supposed to be quick response time,” he told WVUE, “engage with the public, proactive not reactive and just make sure those things don’t happen.”

In other words, ‘suspicious people’ are supposed to just disappear.

Learn more about technology, race, and the future of policing from our Real Future episode on predictive policing.

Ethan Chiel is a reporter for Fusion, writing mostly about the internet and technology. You can (and should) email him at [email protected]
