Study: We're Teaching Artificial Intelligence to Be Just as Racist and Sexist as Humans


We live in a world that’s increasingly shaped by complex algorithms and interactive artificial intelligence assistants that help us plot out our days and get from point A to point B.

According to a new Princeton study, though, the engineers responsible for teaching these AI programs things about humans are also teaching them how to be racist, sexist assholes.

The study, published in today’s edition of Science by Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan, focuses on machine learning, the process by which AI programs learn to make associations from patterns observed in massive quantities of data. In a completely neutral vacuum, that would mean the AI’s responses rest solely on objective, data-driven facts. But because the data sets fed to the AI are selected, written, and shaped by humans, human biases inevitably become part of the AI’s diet.
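To make that concrete, here’s a toy sketch of how a word-embedding model soaks up whatever skew is in its training text. This is not the study’s actual setup (the paper worked with large pretrained embeddings built from web-scale text); the corpus, words, and parameters below are made up purely for illustration, and with data this tiny the numbers will be noisy.

```python
# Toy illustration: a word-embedding model trained on lopsided text
# reproduces the lopsidedness as vector-space associations.
from gensim.models import Word2Vec

# A deliberately skewed "corpus": "doctor" only ever co-occurs with
# "he", and "nurse" only ever with "she".
corpus = [
    ["the", "doctor", "said", "he", "was", "busy"],
    ["the", "nurse", "said", "she", "was", "busy"],
] * 200

# Train a small word2vec model on the toy sentences.
model = Word2Vec(sentences=corpus, vector_size=50, window=3,
                 min_count=1, epochs=50, seed=42)

# The model was never told anything about gender; it only saw
# co-occurrence statistics, and those statistics become its "knowledge".
print("doctor~he :", model.wv.similarity("doctor", "he"))
print("doctor~she:", model.wv.similarity("doctor", "she"))
print("nurse~she :", model.wv.similarity("nurse", "she"))
print("nurse~he  :", model.wv.similarity("nurse", "he"))
```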

To demonstrate this, Caliskan and her team created a modified version of an Implicit Association Test, an exercise that asks participants to quickly associate concrete concepts like people of color and women with abstract attributes like goodness and evil. You can take one right now if you want to.
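In the paper, this adaptation is called the Word-Embedding Association Test (WEAT): instead of timing human reactions, it compares cosine similarities between word vectors. Below is a minimal sketch of the core differential-association statistic (leaving out the effect-size normalization the paper also reports), using random placeholder vectors where a real test would load pretrained embeddings such as GloVe.

```python
# WEAT-style association: how much closer is a target word to one
# attribute set (e.g. pleasant words) than to another (unpleasant words)?
import numpy as np

rng = np.random.default_rng(0)
# Placeholder lookup table: word -> 50-dimensional vector. A real test
# would replace this with pretrained embeddings (GloVe, word2vec, etc.).
vectors = {w: rng.normal(size=50) for w in
           ["flower", "insect", "love", "peace", "hatred", "ugly"]}

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(word, pleasant, unpleasant):
    """Mean similarity to the pleasant set minus the unpleasant set."""
    w = vectors[word]
    return (np.mean([cosine(w, vectors[a]) for a in pleasant])
            - np.mean([cosine(w, vectors[b]) for b in unpleasant]))

def weat_statistic(targets_x, targets_y, pleasant, unpleasant):
    """Differential association of two target sets with the attribute sets."""
    return (sum(association(x, pleasant, unpleasant) for x in targets_x)
            - sum(association(y, pleasant, unpleasant) for y in targets_y))

# With real embeddings, target sets like flowers vs. insects (or, in the
# study's harder cases, different groups of first names) show consistent
# skews toward the pleasant or unpleasant attribute words.
print(weat_statistic(["flower"], ["insect"],
                     ["love", "peace"], ["hatred", "ugly"]))
```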

In the past, Implicit Association Tests have been conducted on human participants to measure how much more inclined people are to, say, associate black people with crime or women with domesticity. While it’s interesting to think of IATs as giving us a glimpse into what kind of people we are, an individual’s results can vary considerably from one test to the next, and ideally you would have to take hundreds, if not thousands, to get an accurate read on your average. That isn’t a problem for AI programs, which can chug through these tests at a speed far beyond human ability and provide clues as to how they came to make their decisions.

Caliskan’s team found that, because the AI were trained on text laden with everyday human biases, they were inclined to replicate those biases. For example, the AI tended to associate names more commonly given to black people with negative connotations. These findings might seem relatively innocuous in a lab, but as AI programs are put to broader use, for instance, sorting through job applications, biases like these could pose real difficulties for marginalized groups.

In a follow-up description of her methodology, Caliskan insists that these sorts of problems are a fact of life for the way machine learning currently works, but that there are solutions that can be implemented going forward. For one, it would help to hire more AI developers who are people of color and/or women and who have a more acute awareness of these biases from the jump. But Caliskan also said there’s a need for transparency about how these machines are being taught.

“Most experts and commentators recommend that AI should always be applied transparently, and certainly without prejudice. Both the code of the algorithm and the process for applying it must be open to the public,” she explains. “Transparency should allow courts, companies, citizen watchdogs, and others to understand, monitor, and suggest improvements to algorithms.”
