Kent Hernandez/Fusion

The world has been worried about killer artificial intelligence for some time now. There's an ongoing, somewhat heated debate about whether or not the robots and AI that scientists are engineering will eventually decide to take us out, like in this movie (spoiler warning, but only if you click). But this debate doesn't need to be limited to a hypothetical future; we have a very real, immediate threat in the form of autonomous weapons: killing machines that would select their targets on their own.

The drones used in America's wars abroad still have humans in the loop, but humans could be engineered out of the decision to drop bombs, and that has leaders in the field worried. At the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, Elon Musk, Apple co-founder Steve Wozniak, and Stephen Hawking, along with 1,000 other AI experts, signed an open letter calling for a ban on autonomous weapons. Google DeepMind's Demis Hassabis and AI guru Geoff Hinton were also among the ban's supporters. From the letter:

AI technology has reached a point where the deployment of [autonomous weapons] is – practically if not legally – feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.

That language is taken straight from an editorial AI expert Stuart Russell wrote in the journal Nature in May.

The Future of Life Institute (FLI), which sponsored the letter, has been a big proponent of robot ethics in recent months. At the beginning of the year, it published an open letter calling for researchers to focus their efforts on artificial intelligence that would benefit society:

The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls. [emphasis mine]


Elon Musk is not just writing letters. He gave FLI a $10 million grant, and on July 1, the nonprofit doled out millions of dollars to research focused on the potential risks of AI. One of the funded projects focuses on lethal autonomous weapons. Nature op-ed writer Russell was among the awardees, for a project on "Value Alignment and Moral Metareasoning." The largest single grant was given to FLI board member and ethicist Nick Bostrom, who proposed a research center focused on the dangers of artificial intelligence:

The center will focus explicitly on the long-term impacts of AI, the strategic implications of powerful AI systems as they come to exceed human capabilities in most domains of interest, and the policy responses that could best be used to mitigate the potential risks of this technology. There are reasons to believe that unregulated and unconstrained development could incur significant dangers.

Although the debate has gotten more press lately, thanks to high-profile figures like Musk and Hawking taking notice, it's been ongoing for some time now. Earlier this year, the United Nations called for an international treaty that would ban fully autonomous weapons. In 2012, Human Rights Watch published a report stating "that such revolutionary weapons would not be consistent with international humanitarian law and would increase the risk of death or injury to civilians during armed conflict."


Part of that "risk of death or injury" comes from the fact that AI systems make mistakes. Earlier this month, for instance, Google Photos mistook images of black people for gorillas. That's offensive and awful, but no one died as a result of the software flaw. In military scenarios, the stakes, as Russell wrote in Nature, are high. People's lives are on the line.

In terms of the law, things can also get hairy. There's no clear rule yet on how we should handle robots that kill or break the law, University of Washington cyberlaw expert Ryan Calo has told me on multiple occasions. International situations are even trickier, because they require multiple parties to agree on what is lawful and what isn't. The UK, for instance, opposes a ban on autonomous weapons. So long as one country holds out, the rest have little incentive to stop developing killer machines.

Daniela Hernandez is a senior writer at Fusion. She likes science, robots, pugs, and coffee.