In ancient Greece, the philosopher Aristotle once said that "it is possible to infer character from features." Through the 1800s, scientists took his cue, pushing theories that bad people could be identified through their looks alone. Then, of course, science happened, and Aristotle's comment was taken with a mountain of salt.
But a troubling research paper on this very topic, written by scientists at Shanghai Jiao Tong University in China and posted to the arXiv preprint server (hosted by the Cornell University Library) earlier this month, apparently didn't get the memo. The paper, titled "Automated Inference on Criminality using Face Images," claims that you can accurately predict whether someone is a criminal based on a few key facial characteristics. It harkens back to a dangerous time when people actually believed this was the case.
The difference is that it uses the language of algorithms and contemporary computer science to do its dirty work.
The research, coming as we stare down the barrel of a Donald Trump presidency, is alarming. Given his narrative of rising crime and rampant immigration, which lacks a basis in fact, this is exactly the kind of cyber-sounding pseudoscience he could use to justify a troubling "law and order" agenda, one that would likely come at a high cost to people of color.
In the paper, the researchers detail how they took images of 1,856 real Chinese citizens, 730 of whom were convicted criminals, and ran them through four different automated classifiers. The most successful of these, a convolutional neural network, identified the criminals in the group 89.51% of the time, the researchers report.
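A headline accuracy figure like that should be weighed against the dataset's class balance. Using only the counts the paper reports (730 criminals out of 1,856 images), a quick back-of-the-envelope check in Python (an illustration, not the authors' code) shows what a trivial classifier that always answers "non-criminal" would score:

```python
# Majority-class baseline for the dataset described in the paper:
# 1,856 images total, 730 of them convicted criminals.
total = 1856
criminals = 730
non_criminals = total - criminals  # 1,126

# A classifier that always predicts "non-criminal" is right on every
# non-criminal and wrong on every criminal.
baseline_accuracy = non_criminals / total
print(f"Always-'non-criminal' baseline: {baseline_accuracy:.2%}")  # prints 60.67%
```

So the reported 89.51% beats chance, but accuracy alone says nothing about *why* the model separates the two groups, which is the crux of the criticism that follows.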
According to the researchers, the facial features that supposedly predict whether someone is predisposed to criminality are: the curve of the lip, the distance between the inner corners of the eyes, and the "so-called nose-mouth angle."
Mix particular versions of these features up in a pot and you have a criminal, apparently.
"The most important discovery of this research is that criminal and non-criminal face images populate two quite distinctive manifolds," the researchers write. And this kind of research should be further explored, "despite the historical controversy surrounding the topic," they add.
It's like they were aware of the dangerous territory they were wading into, but they just couldn't help themselves.
The researchers envision a dystopian future where computers would be able to tell who is an upstanding citizen and who is a potential threat. The following paragraph is jarring in its implications:
Unlike a human examiner/judge, a computer vision algorithm or classifier has absolutely no subjective baggages, having no emotions, no biases whatsoever due to past experience, race, religion, political doctrine, gender, age, etc., no mental fatigue, no preconditioning of a bad sleep or meal. The automated inference on criminality eliminates the variable of meta-accuracy (the competence of the human judge/examiner) all together.
In recent years, researchers in the U.S. have separately been developing software that promises to help law enforcement predict people's future behaviors by developing complex algorithms based on data about the person's life and criminal history. Some states controversially use these algorithms to calculate people's prison sentences. Defenders of the technology put their full faith in the robots' decision-making, expressing a similar sentiment that the technology would remove human error from the equation. But in practice, the technology has been found to be racially biased, giving harsher treatment to people of color.
An algorithm is only as good as the data you feed into it, and decades of overpolicing people of color have led to statistical inferences about their criminality that the technology simply reinforces. If you were to look for common, aggregated features of the American criminal, the end result would probably look like a black or brown person. Relying on these profiles only starts a feedback loop that leads to more overpolicing of those same communities.
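That feedback loop is easy to reproduce in miniature. The toy simulation below (a sketch of the general "biased labels in, biased predictions out" point, with made-up rates; it is not the paper's method or real crime data) gives two groups the exact same underlying offense rate but polices one twice as heavily. Any model fit to the resulting arrest records learns each group's arrest frequency, and so scores the heavily policed group as roughly twice as "criminal" despite identical behavior:

```python
import random

random.seed(0)

# Two groups with the SAME underlying offense rate...
TRUE_OFFENSE_RATE = 0.10
# ...but group "A" is policed twice as heavily, so its offenses are
# twice as likely to show up in the arrest data. (Hypothetical rates.)
DETECTION = {"A": 0.80, "B": 0.40}

data = []
for group in ("A", "B"):
    for _ in range(10_000):
        offended = random.random() < TRUE_OFFENSE_RATE
        arrested = offended and random.random() < DETECTION[group]
        data.append((group, arrested))

def arrest_rate(g):
    # The per-group arrest frequency is exactly what a simple
    # one-feature classifier trained on this data would converge to.
    rows = [arrested for grp, arrested in data if grp == g]
    return sum(rows) / len(rows)

print(f"Learned 'criminality' score, group A: {arrest_rate('A'):.3f}")
print(f"Learned 'criminality' score, group B: {arrest_rate('B'):.3f}")
# Group A scores about twice as high, purely because it was watched harder.
```

The model never sees the true offense rate, only the arrests, so the bias in who gets stopped becomes the "signal" it confidently reports back.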
The Shanghai Jiao Tong University researchers make no note of this inherent reality. But they do note that "criminals have a significantly higher degree of dissimilarity in facial appearance than normal population." In other words, the faces of non-criminals are basically homogeneous. It's only the criminals who stray outside those societal norms.
In the case of this paper, the pseudoscience that the authors present is an updated version of phrenology—the "science" that once declared that it could make sweeping generalizations about people based on the shape of their heads.
U.S. politics, it should be noted, is far from immune to being swept up in this kind of snake-oil nonsense. In the 1920s, the notoriously racist eugenicist Harry Laughlin convinced the public that his research showed certain ethnic groups were more predisposed to criminality than others.
The Immigration Act of 1924 was passed largely on the strength of his "science," which was, of course, later revealed to be deeply rooted in racism. The law set immigration quotas that heavily favored "Aryans" from Western and Northern Europe and sharply cut back immigration from Southern and Eastern Europe. Jewish and Italian immigration, which had been rising quickly, was brought nearly to a halt.
Modern Jewish writers credit the 1924 law as one of the reasons the Holocaust happened: The U.S., by law, couldn't take in more Jewish people, even as Hitler rampaged across Europe during World War II. During one of the worst genocides in recorded human history, America's role as a place of refuge was taken off the table in the name of bunk science. But when he signed the bill into law, President Calvin Coolidge sang a different tune, one that might sound familiar to our abused 2016 ears.
"America must remain American," Coolidge said.
When President-elect Trump steps into office, he will begin overseeing the FBI, which controls some of the world's most advanced facial recognition technology. The Shanghai Jiao Tong University paper is exactly the kind of troubling research he could rely on if he decides to push that technology in a very ugly direction.
Science is only as good as the data that goes into it. This study is seriously flawed, and we have to hope that nobody takes it seriously, least of all someone in a position of serious power.
Daniel Rivero is a producer/reporter for Fusion who focuses on police and justice issues. He also skateboards, does a bunch of arts-related things in his off time, and likes Cuban coffee.