How we should worry about artificial intelligence


In June 1972, the New York Times ran an article with the headline “Man and Computer: Uneasy Allies of 25 Years.” It told the story of 50 computer scientists gathered in Princeton for a 3-day symposium. They were there to celebrate the 25th anniversary of the birth of the modern computer. But they also took the opportunity (I hope between beers) to discuss how computers would impact the future of the human race. “Many of those who spoke displayed a fear that insofar as the computer simulates thinking, it threatens the primacy of man,” wrote Boyce Rensberger.

Fast forward to January 2015. A group of about 80 AI experts, tech entrepreneurs, economists, ethicists and lawyers gathered in Puerto Rico for a roughly 3-day symposium. They met to discuss the future of AI, its opportunities and challenges. Some expressed fears that AI would go rogue and threaten the primacy of man. Participants signed a letter pledging to invest in research that would help us understand the ways in which AI could malfunction and how we could forestall that bleak future. It’s got the ring of a sequel: Man and Computer: Uneasy Allies, the next 50 years.

As with any good sequel, the plot had thickened: The academics in 1972 were largely talking about applications in science and mathematics. Since then, AI has busted out of the ivy-lined ivory tower. It’s now in the hands of consumers. We interact with it every day. And so it behooves the companies building these systems to reassure consumers that they’re thinking about how AI will affect their privacy, their safety, their jobs, their very existence.

There’s still another plot twist. Last year, DeepMind — a startup Google recently snatched up — became a geek-household name after it demoed a program that taught itself how to play various Atari games better than humans, sometimes by playing in unpredictable ways. DeepMind’s system was a mixture of AI tools that can learn from experience. The engineer just codes the learning procedure, says Bart Selman, an artificial intelligence researcher at Cornell University. The program then scans lots of data and develops its own strategy to accomplish its task — in the case of the DeepMind program, to become the best Atari player ever. The how isn’t entirely clear.

“These systems can learn new kinds of behaviors or new ways of doing things,” says Selman. “We’re building systems that have these abilities that go beyond what we can clearly understand.”
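Selman’s point can be made concrete with a toy sketch. What follows is not DeepMind’s system (which pairs this idea with deep neural networks); it’s a minimal, illustrative Q-learning loop in Python, a classic learn-from-experience algorithm, with a made-up "game" for it to play. The engineer codes only the update rule, and the winning strategy emerges on its own:

```python
import random

# Toy "game": a 5-cell corridor. The agent starts at cell 0 and is
# rewarded only for reaching cell 4. No one programs the strategy;
# the engineer codes only the learning rule below.
N_STATES = 5
ACTIONS = [-1, +1]                       # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

random.seed(0)
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Mostly exploit the current value estimates, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward the reward plus
        # the discounted value of the best action in the next state.
        best_next = max(q[(s2, act)] for act in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s2

# The strategy that emerges from trial and error: always step right.
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

Nothing in the code says “move right”; the preference for stepping right is learned from the rewards. That gap between the rule the engineer wrote and the behavior that emerges is exactly the opacity Selman describes.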

That uncertainty seems to be why people are taking to the streets (or at least Twitter) talking about how ‘AI will doom the human race.’ In theory, these programs could learn to do other things, like develop better-than-human stock trading skills, driving maneuvers, or cold-blooded execution tactics that make Skynet look like child’s play.

So are mercenary robots—or their renegade, disembodied deep-minded cousins—what we should be worried about right now? Not quite. First off, the technology isn’t there yet. “We’re probably decades, if not longer, from any general-purpose intelligent system,” said Eric Horvitz, the managing director of Microsoft Research and the brains behind Stanford’s One Hundred Year Study on Artificial Intelligence (AI100), which focuses on some of the ethical issues surrounding AI. Because we’re a long way off from AI and robots that have common sense and are good at many tasks, scientists have time to develop methods for better predicting whether an AI system will start acting up in ways that harm humans or break human laws. In fact, they’re already starting research into algorithms that would put the brakes on bad AI.

So if robo-killers aren’t going to blow us to smithereens any time soon, what should we be worried about now? Here are a few of the more probable scenarios:

Thanks for taking my job, robot.

At a recent AI summit in San Francisco, AI expert Andrew Ng said that the focus on killer robots was a distraction from the more tangible effects AI would have on society.

What happens to people when their jobs are replaced by technology? Can we prepare people better for a knowledge-based economy? So far, our education system isn’t doing a very good job.

My robot broke the law. Now what?

By now, we’ve all probably heard about the ‘bot that purchased drugs on the internet. In that case, the ‘bot-bought ecstasy was confiscated, but what happens when computers start doing things that aren’t so visible? Take insider trading. “With the technology becoming more sophisticated and electronic, it becomes harder to see,” said Selman, the Cornell AI professor. “The trading system could figure out a way to collude and there could be trades made in microseconds that a regulator would have a very hard time realizing.”

How to make sure AIs don’t pull a Martha Stewart—or engage in other criminal acts—is becoming an active research area. The law also needs to catch up to the technology. “The capability [of AI systems] to display emergent behavior blows up certain assumptions that law has today about when to hold people responsible,” says University of Washington law professor Ryan Calo, who specializes in cyberlaw and robotics.

Are robots discriminating against me?

With systems that can draw inferences about us, are we in danger of being discriminated against based on gender, race, or other parts of our background? “What are the implications for privacy of systems that can make inferences about the goals, intentions, identity, location, health, beliefs, preferences, habits, weaknesses, and future actions and activities of people?” wrote Horvitz in the AI100 study white paper.

As with the financial fraud example, it might be difficult to figure out if this is happening if we don’t put the proper structures in place from the get-go.

AI isn’t coming to the rescue fast enough

We’re all worried about AI taking us out, but in some cases, it can actually be quite helpful. Most hospitals, for example, still run antiquated systems that don’t track patients too well. That makes it hard to predict which ones will be at risk for an infection or relapse. If we could change that, Horvitz says, the impact on healthcare would be tremendous. “We know AI systems can save lives and reduce costs in healthcare, but we haven’t managed to put them into service,” says Horvitz. “And that converts to lives being lost.”

Siri-ous Business is a weekly look at how artificial intelligence is changing our world, for better or worse. But hopefully for better. Got a story? Shoot me an email: [email protected]. If I don’t answer, blame the bots.

Daniela Hernandez is a senior writer at Fusion. She likes science, robots, pugs, and coffee.
