What we need Elon Musk's billion-dollar AI non-profit to actually do

We live in fear that evil, Terminator-like AIs will take over the world and kill us all, even though experts say we're nowhere near that. But what we should really fear is that Agent Smith AIs will take over the banking industry and quietly deny loans to the socioeconomically disadvantaged.


When Elon Musk, founder of Tesla and SpaceX, announced Friday that he is helping to fund OpenAI, a new billion-dollar non-profit that will develop open-source artificial intelligence, he voiced the same old concerns about making sure AI doesn't start killing people. "We're going to be very focused on safety," Musk told tech writer Steven Levy. "This is something that I am quite concerned about."

He's right to be worried about safety. In 2013, the Associated Press reported freak incidents of surgical robots slapping patients on the operating table. And this year, a worker at a Volkswagen plant was killed by a "robot," and Musk's own Tesla released an Autopilot beta feature for its Model S that drivers immediately began testing on open roads, leading to some dangerous encounters.

But some of the other disturbing AI mistakes we've seen this year have been discriminatory rather than unsafe, such as Google Photos' AI mistakenly classifying photos of African-Americans as gorillas and a Google algorithm showing women lower-paying jobs than men. Ideally, OpenAI, whose stated mission is to "benefit humanity as a whole, unconstrained by a need to generate financial return," would set its sights on making sure not just that AI is safe, but that it doesn't magnify social and cultural biases we're trying to overcome.

"It's a very challenging problem," said Matt Zeiler, CEO of Clarifai, an AI startup that just released a photo organization app, when I asked him about the Google Photos gorilla fail. "The models don't know anything about social norms. They just see a bunch of numbers."
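Zeiler's point can be made concrete with a toy sketch (this is hypothetical illustration, not Clarifai's or Google's actual model): to a classifier, an image is just an array of pixel values, and a "label" is just whichever score comes out highest. Nothing in the math encodes what a label means socially.

```python
# Toy illustration: a classifier only sees numbers, never meanings.

def classify(pixel_values, weights, labels):
    # Score each candidate label as a weighted sum of pixel values.
    scores = []
    for label_weights in weights:
        score = sum(p * w for p, w in zip(pixel_values, label_weights))
        scores.append(score)
    # The model "knows" nothing about what the labels signify;
    # it simply picks the index of the largest number.
    best = scores.index(max(scores))
    return labels[best]

# A made-up 4-pixel "image" and made-up weights for two labels.
image = [0.9, 0.2, 0.4, 0.7]
weights = [[1.0, 0.0, 0.5, 0.2],   # weights for label_0
           [0.1, 0.8, 0.3, 0.9]]   # weights for label_1
print(classify(image, weights, ["label_0", "label_1"]))  # prints "label_0"
```

If the training data or weights happen to push an offensive label's score highest, the model outputs it without hesitation; there is no notion of social norms anywhere in the arithmetic.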

The exact projects that OpenAI will go after haven't been disclosed. But in the AI community more generally, most research is focused on physical safety issues. Google has a team of people working on making sure its self-driving cars behave well in unpredictable situations. DARPA launched a program to ensure new networked systems are safe and secure. Musk is also bankrolling the Future of Life Institute, an organization focused on AI safety and ethics that called for a ban on autonomous weapons earlier this year.

What's not getting enough funding or attention is giving our artificially intelligent friends some serious sensitivity training. As artificial intelligence moves from roads to homes, schools and hospitals all over the world, it'll become increasingly important for these technologies to behave themselves and to be attuned to cultural and social sensibilities.


Racial profiling, for instance, hasn't been tackled head-on by the Future of Life Institute, Bart Selman, an AI expert at Cornell University and a recipient of an Institute grant, told me earlier this year. “It will fall under the broader umbrella of how to properly constrain AI systems to be ethical, predictable, non-discriminatory, and, in general, conform to our societal standards,” he said.

Societal standards, though, are set by people in power, and those people tend to be white. They may live in ignorance of the prejudices that non-whites face every day. How can they constrain something with which they're not familiar?


As AIs become better and cheaper, they'll play a bigger role in our lives—like in stock trading, price-setting algorithms, customer call centers and more. The potential for discrimination, fraud and danger is tremendous because the technology will affect millions, if not billions, of people. In tech speak, they'll be built to scale.

"One way to think of the problem of highly-scaled power is that the consequences of mistakes can be exponentially greater—whether it's going from rocks for weapons to AK-47s, or going from stock trades via telegraph to bots that can do hundreds of thousands of trades per hour without human oversight," said Alan Kay, a computer science pioneer and an advisor to OpenAI, in an email.


With the help of algorithms, marketers can accurately target low-income populations with high-interest loans, for instance. When Latanya Sweeney, a computer scientist and the director of Harvard's Data Privacy Lab, typed her name into Google, she saw ads for companies that do criminal background checks, implying she might have been arrested previously. She's African-American. She then did a study of 120,000 names and found that searches for names popular among African-Americans were more likely to return ads for services suggesting the person had a criminal record.
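The spirit of Sweeney's kind of audit can be sketched in a few lines: run many searches for names associated with each group, record whether an arrest-related ad appeared, and compare the rates. All numbers below are made up purely for illustration.

```python
# Hypothetical ad-audit sketch (illustrative data, not Sweeney's results):
# compare how often arrest-related ads appear across two groups of names.

def ad_rate(results):
    """Fraction of searches that returned an arrest-related ad."""
    return sum(results) / len(results)

# 1 = an arrest-related ad was shown for that search, 0 = it was not.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # names popular among one group
group_b = [0, 0, 1, 0, 0, 1, 0, 0]   # names popular among another

rate_a = ad_rate(group_a)  # 0.625
rate_b = ad_rate(group_b)  # 0.25
print(f"disparity: {rate_a / rate_b:.1f}x")  # prints "disparity: 2.5x"
```

A real audit needs far larger samples and statistical tests, but the core idea is just this comparison of rates between groups.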


We need to keep these practices in check, lest we enter a new digital era of unprecedented racial and socioeconomic subjugation. Having a diverse set of perspectives is one place to start.

The potentially beneficial thing about OpenAI is that if the systems it builds end up powering physical AIs (a.k.a. robots) or virtual ones, like Siri or the "driver" in an autonomous vehicle, they would be open. Experts would be able to look at them, see how they work, and figure out what tweaks need to be made to make them safer, more efficient or less offensive, without having to adhere to non-disclosure agreements.


In its inaugural blog post, OpenAI's founding members, who include many tech luminaries, wrote that "we believe AI should be an extension of individual human wills." The group has promised to release everything except work that could pose a safety risk.

Experts are excited about open source and OpenAI. But it's important to remember that what OpenAI releases and when is up to a small, and not very diverse, group of people. The organization is mostly men, with expertise in one breed of AI called deep learning. All the announced backers, except for Y Combinator's Jessica Livingston, are dudes. Some, like Musk, have clear financial interests in the future of AI. The research group will be led by Ilya Sutskever, a former Googler, and will include seven researchers from top AI hubs, including Stanford, UC Berkeley, Facebook and NYU. Two, Vicki Cheung and Pam Vagata, are women.


To be fair, that's the sad state of AI, and tech in general. Facebook's famed AI research lab has 48 researchers as of this writing, five of whom are women, according to its employee roster. (It'll soon be one down, since Vagata is heading to OpenAI.) The prestigious Canadian Neural Computation & Adaptive Perception group, often credited with bringing about the new AI renaissance, is all men. At the AI conferences I've frequented, women and minorities have been vastly outnumbered by men.


Technology can't serve "humanity as a whole" if its creators aren't reflective of its users. We've begun to see some issues arise with simpler technologies as a result. VR headsets don't work as well for women as for men because they were engineered by and tested on mostly male users. The digital heart rate monitors people wear on their wrists aren't as accurate for dark-skinned people as they are for white folks. Genetic databases, which AI systems will need in order to learn what goes wrong in genetic diseases, are not very diverse. AI-powered technologies will reach many more people than today's mobile gadgets, so the repercussions, as Kay suggests, will be much greater.

Human wills and harms aren't homogenous. Values differ among cultures. They differ among genders and ethnic groups. They differ from country to country, even from neighborhood to neighborhood. Our AIs should reflect that too. Otherwise, they'll have cultural blind spots. Again, we can't expect them to treat everyone equally and fairly if their makers don't reflect our diverse society. That's a challenge I hope OpenAI takes into account as it beefs up its staff and starts building stuff.


Daniela Hernandez is a senior writer at Fusion. She likes science, robots, pugs, and coffee.