Google scientists are tackling the most dangerous AI out there: Roombas having accidents


A group of researchers from Google, Stanford, U.C. Berkeley, and OpenAI has released a new paper on the need to face our most dire artificial intelligence threat: cleaning robots.

The new paper, titled “Concrete Problems in AI Safety,” outlines various ways robots can have accidents, along with potential solutions. To illustrate the different sorts of trouble a robot could get into, the researchers rely on the example of “a fictional robot whose job is to clean up messes in an office using common cleaning tools.”

The paper outlines five broad types of potential problems that need to be avoided:

  1. Robots fulfilling their functions but harming their environment, like a cleaning robot knocking over a vase;
  2. Robots reward-hacking rather than doing their jobs, as in a cleaning robot simply covering up a mess;
  3. Robots needing too much human oversight, like bugging a person for instructions too often;
  4. Robots failing to be careful of their environments, like sticking a wet mop in a socket; and
  5. Robots failing to adapt to environments that are unlike the ones in which they’ve been trained.

It’s not exactly Asimov’s three laws, but it is reflective of the world we live in. After all, cleaning robots have been known to have some nasty accidents, as when one ate a woman’s hair while she slept.

The paper offers some broad potential solutions, such as teaching machines in simulated environments to avoid dangerous mop-related behavior and programming cleaning robots to avoid getting too close to items they might knock over.

Beyond the amusing choice of an accident-prone cleaning robot as its mascot, the paper is a refreshing break from the outsized attention paid to far-fetched concerns about artificial intelligence, such as the possibility that we’re living in a simulation, are at risk from a future superintelligence, or face a world disassembled by rampaging nanobots. A post published on Google’s research blog by co-author Chris Olah gives the impression that the authors agree that a lot of talk about AI safety is a little out there.

“[M]ost previous discussion has been very hypothetical and speculative,” Olah writes. “We believe it’s essential to ground concerns in real machine learning research, and to start developing practical approaches for engineering AI systems that operate safely and reliably.”

This is true! Especially since we’re increasingly going to face more practical AI problems, as when an automated car’s safety measures fail to deploy properly, or a self-driving car’s calculations go wrong and lead to a crash.

It’s not surprising that this sort of work is on the Google researchers’ minds. Parent company Alphabet has been pouring money into research on self-driving vehicles and has an entire team dedicated to machine learning (the three Google co-authors of this paper belong to that team). Regardless, it’s nice to see researchers looking at the realistic problem of robots knocking things over instead of robots conquering humanity.

Ethan Chiel is a reporter for Fusion, writing mostly about the internet and technology. You can (and should) email him at [email protected]
