When your driverless car decides who lives and who dies

By Daniela Hernandez

Imagine you're riding along in your autonomous car on a crowded two-lane street. You're tweeting, checking Facebook, and Instagramming some cool buildings as you pass by. Basically, you're enjoying the modern joy of a driverless world. Then you notice that, oh crap… Houston, we have a problem. A school bus full of happy-go-lucky third graders is about to ram right into you and your robo-Prius. There's only one option: swerve right to spare the kiddos. But that entails ramming into your granny instead.

You might choose to go for the bus and spare your loved one. But in a utilitarian world where robots supposedly know best, will they choose the path that harms the fewest people? That's the question bioethicist Ameen Barghi posed today. (The question itself isn't new. Ethicists have been asking variations of it for a while, but Barghi reimagines it in the context of driverless vehicles and artificial intelligence.)

The philosophical exercise does bring up some potentially important legal questions. If the software controlling a robo-car has learned your preferences, likes, and dislikes, and chooses to spare your grandmother and take out the kids instead, who's liable? You, the software manufacturer, or the automaker? Are you or the machine on the hook for murder? What happens if the software malfunctions at a crucial moment? Right now, the law is murky on this topic, at best.

We're already coming to grips with some of these scenarios, with supposedly self-stopping cars hitting pedestrians and non-robo-cars getting into accidents with Google's self-driving cars.

Until the law catches up with the technology, one way to sidestep legal quagmires would be to have the car notify you (and the adult riding the bus with the kids, assuming it's on autopilot as well) that something is amiss. Then humans, not machines, would make the ultimate decision on what to do.

Then there's also the ethical question of how you program "morality" into a robot. Everyone has a different moral compass, and each situation is different. In this case, most people choose to save the group and sacrifice the individual, but whose ethical preferences and interests will be represented in the software we interact with on a daily basis?
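To make that abstract worry concrete, here is a minimal, purely illustrative sketch (in Python) of what a "harm the fewest people" rule might look like once someone actually writes it down. Nothing here comes from Barghi or any carmaker; the Option class, the casualty estimates, and the kin_weight knob are all assumptions invented for the example. The point is that somebody's moral compass ends up as a number a programmer has to pick.

```python
# Illustrative only: a crude utilitarian decision rule for the scenario above.
# Every class, weight, and estimate here is a hypothetical assumption, not a real
# autonomous-vehicle API.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    people_harmed: int        # expected number of people hurt on this path
    includes_rider_kin: bool  # does this path harm someone the rider loves?

def utilitarian_choice(options):
    """Pick the path expected to harm the fewest people, no matter who they are."""
    return min(options, key=lambda o: o.people_harmed)

def personalized_choice(options, kin_weight=10.0):
    """Same rule, but with an extra penalty for harming the rider's loved ones.
    The weight is an arbitrary knob: whoever sets it is encoding a moral stance."""
    return min(options, key=lambda o: o.people_harmed
               + (kin_weight if o.includes_rider_kin else 0))

scenario = [
    Option("stay the course, collide with the bus", people_harmed=20, includes_rider_kin=False),
    Option("swerve right, hit your granny", people_harmed=1, includes_rider_kin=True),
]

print(utilitarian_choice(scenario).name)   # picks the single-casualty path
print(personalized_choice(scenario).name)  # still does here, but only because of the chosen weight
```

Raise that hypothetical kin_weight from 10 to 25 and the "right" answer flips to hitting the bus, which is exactly the whose-values-are-in-the-code question raised above.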

These are questions we'll have to ask ourselves more and more in the coming years. For now, you can vote on what decision you think is "right" here.

Daniela Hernandez is a senior writer at Fusion. She likes science, robots, pugs, and coffee.