[Illustration: Elena Scotti/FUSION]

A group of researchers from Google, Stanford, U.C. Berkeley, and OpenAI has released a new paper on the need to face our most dire artificial intelligence threat: cleaning robots.

The new paper, titled "Concrete Problems in AI Safety," outlines various ways robots can have accidents, along with potential solutions. To illustrate the different sorts of trouble a robot could get into, the researchers rely on the example of "a fictional robot whose job is to clean up messes in an office using common cleaning tools."

The paper outlines five broad types of potential problems that need to be avoided:

  1. Robots fulfilling their functions but harming their environment, like a cleaning robot knocking over a vase;
  2. Robots reward-hacking rather than doing their jobs, as in a cleaning robot simply covering up a mess (a toy sketch of this appears after the list);
  3. Robots not adjusting to oversight, like bugging a human for instructions too often;
  4. Robots failing to explore their environments carefully, like sticking a wet mop in a socket; and
  5. Robots failing to adapt to environments unlike the ones in which they've been trained.
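
To make the reward-hacking item concrete, here is a minimal toy sketch in Python. None of it comes from the paper itself; the office, the `visible_mess` proxy, and both strategies are invented for illustration. The point is just that a robot graded on how little mess its camera can see scores exactly as well by hiding the mess as by cleaning it.

```python
# Toy illustration of "reward hacking" (problem 2). Entirely hypothetical,
# not code from the paper: the robot is rewarded for how little mess it can
# *see*, so hiding the mess scores as well as actually cleaning it.

def visible_mess(world):
    """Reward proxy: count the mess squares the robot's camera can see."""
    return sum(1 for square in world if square == "mess")

def clean(world):
    """Honest strategy: actually remove the mess."""
    return ["clean" if square == "mess" else square for square in world]

def cover_up(world):
    """Hacking strategy: park a bucket over each mess so the camera misses it."""
    return ["bucket" if square == "mess" else square for square in world]

office = ["clean", "mess", "clean", "mess"]

for strategy in (clean, cover_up):
    after = strategy(office)
    # Reward = reduction in visible mess. Both strategies score identically,
    # even though only one of them did the job.
    print(strategy.__name__, "reward:", visible_mess(office) - visible_mess(after))
```

Run it and both strategies earn the same reward, which is exactly the loophole the researchers want to design out of real systems.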


It's not exactly Asimov's three laws, but it is reflective of the world we live in. After all, cleaning robots have been known to have some nasty accidents, as when one ate a woman's hair while she slept.

The paper offers some broad potential solutions, such as teaching machines in simulated environments to avoid dangerous mop-related behavior and programming cleaning robots to avoid getting too close to items they might knock over.
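
That second idea can be read as a penalty term in the robot's objective. The sketch below is a hypothetical Python illustration, not code from the paper: the vase location, the penalty weight, and both paths are invented, and the "robot" is just a list of grid coordinates. Each step taken next to the vase subtracts from the robot's score, so a longer route that steers clear of it comes out ahead.

```python
# Hypothetical sketch of an impact penalty in the spirit of the paper's
# avoid-side-effects discussion. Names and numbers are illustrative.

FRAGILE = {(2, 2)}       # assumed location of the vase on a small grid
PENALTY_WEIGHT = 0.5     # assumed trade-off between speed and safety

def near_fragile(pos):
    """True if a grid position is on or adjacent to something fragile."""
    return any(abs(pos[0] - fx) + abs(pos[1] - fy) <= 1 for fx, fy in FRAGILE)

def score(path, base_reward=10.0):
    """Reward for finishing the chore, minus a cost for every risky step."""
    risky_steps = sum(1 for pos in path if near_fragile(pos))
    return base_reward - PENALTY_WEIGHT * risky_steps

short_but_risky = [(0, 2), (1, 2), (2, 2), (3, 2)]               # brushes past the vase
long_but_safe = [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2)]  # detours around it

print("short path:", score(short_but_risky))  # lower: risky steps are taxed
print("long path: ", score(long_but_safe))    # higher: the detour avoids the vase
```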

Beyond the funny choice of a naughty OCD robot, the paper is a refreshing break from the outsized attention paid to far-fetched concerns about artificial intelligence, such as the possibility that we're living in a simulation, are at risk from a future super-intelligence, or face a future where the world is disassembled by rampaging nanobots. A post published on Google's research blog by co-author Chris Olah gives the impression that the authors agree that a lot of talk about AI safety is a little out there.


"[M]ost previous discussion has been very hypothetical and speculative," Olah writes. "We believe itā€™s essential to ground concerns in real machine learning research, and to start developing practical approaches for engineering AI systems that operate safely and reliably."

This is true! Especially since we're increasingly going to be faced with more practical AI problems, as when an automated car's safety measures fail to properly deploy, or a self-driving car's calculations go wrong and lead to a crash.

It's not surprising that this sort of work is on the Google researchers' minds. Parent company Alphabet has been pouring money into research on self-driving vehicles and has an entire team dedicated to machine learning (the three co-authors from Google on this report belong to that team). But regardless, it's nice to see researchers looking at the realistic problem of robots knocking things over instead of robots conquering humanity.


Ethan Chiel is a reporter for Fusion, writing mostly about the internet and technology. You can (and should) email him at ethan.chiel@fusion.net