Scientists Teaching Robots Right From Wrong

In a move that seems inspired by Isaac Asimov and his three laws of robotics (among them the maxim that “A robot may not injure a human being or, through inaction, allow a human being to come to harm”), a group of scientists is now tackling the huge task of programming autonomous robots to learn the difference between right and wrong.

The project is a collaboration between researchers at Tufts, Brown, and Rensselaer Polytechnic Institute, and has received funding from the Office of Naval Research as part of the Multidisciplinary University Research Initiative.

The researchers hope to impart moral decision-making to robots by presenting them with various hypothetical scenarios. One such example, according to Tufts Now, asks whether a robot may cause pain to an injured soldier if doing so could save his life. The goal is for robots to develop a more sophisticated ability to reason for themselves, even when their programming doesn’t offer a clear-cut solution.

“When an unforeseen situation arises, a capacity for deeper, on-board reasoning must be in place, because no finite rule set created ahead of time by humans can anticipate every possible scenario,” explained Selmer Bringsjord, head of the Cognitive Science Department at RPI.

The better we understand the complexity of our own moral cognition and the elements that make it up, the more easily we can distill those principles and instill them in the robots of the future, and the sooner they’ll be able to enslave us.

Julian Reyes is a VR Producer for Fusion.
