Part of these longstanding issues comes from an inherent flaw in how the technology actually works. As HP explained during the 2010 debacle, it's harder for algorithms to "measure the difference in intensity of contrast between the eyes and the upper cheek and nose" on a darker-skinned subject.

But at least part of the problem might be the tech bubbles in which this software is being developed. A 2011 study found that software built in East Asia was better at recognizing East Asian faces, while software developed in North America and Europe was better at recognizing white faces.

The study likened this reality to the cringeworthy "they all look the same to me" comments we sometimes hear when people talk about a race other than their own. The findings suggest that "our ability to perceive the unique identity of other-race faces is limited relative to our ability to perceive the unique identity of faces of our own race," the researchers wrote.

Somehow, that inherent bias is getting transferred into software code.

The FBI insists that the information in the NGI won't be used to positively identify anyone, but rather to produce a ranked list of potential suspects in criminal cases. On those grounds, it claims that "there is no false negative rate" for the technology, as it told the Electronic Frontier Foundation, one of the signatories of the letter, back in 2014.

Even so, there's still "a very good chance that an innocent person will be put forward as a suspect for a crime just because their image is in NGI—and an even better chance this person will be a person of color," the EFF noted in a blog post published alongside yesterday's letter.

The FBI's crime data is already flawed in ways that can affect anyone who has come in contact with the system. In a 2013 report, the National Employment Law Project found that about half of the FBI's records are missing information on whether the person was ultimately convicted or acquitted of the crime they were accused of committing. In many of the cases the report detailed, this led to people being denied employment based on arrests for crimes of which they were later cleared. Because of disproportionate arrest rates, these issues primarily affect people of color.

Without the protections of the Privacy Act, Tuesday's letter claims, "private citizens could never take [the FBI] to court" to correct inaccuracies in the NGI data. Nor would the feds have to disclose where they get the data (private security cameras? state-issued driver's license photos? employer-mandated background checks?), or how the data they collect on us is being used.

Communication between the federal government and other law enforcement agencies has long been mired in systemic problems. Overall, it's probably a good thing that the FBI is heading this effort, using cutting-edge technology and putting data from all levels of law enforcement into the same information-sharing system.

But that progress shouldn't have to come at the expense of transparency, or of the government's obligation to tell us how it uses the information it collects, especially when the FBI has said in the past that it would like to deploy these technologies at "critical events," which some suggest could include First Amendment-protected political rallies.

It's taken eight years to get a proper privacy notice published about this new frontier of policing technologies. At the very least, it's worthy of another month of our attention.

Daniel Rivero is a producer/reporter for Fusion who focuses on police and justice issues. He also skateboards, does a bunch of arts-related things in his off time, and likes Cuban coffee.