It's good news that Google's self-driving car caused an accident

Google’s software caused a traffic accident? Good.

The minor scrape took place on February 14, in Mountain View, California. A Google self-driving car was in the right-hand turn lane, preparing to turn right, when it found its path blocked by sandbags and had to merge back into the traffic on its left. It allowed “a few cars” to pass, and then decided to squeeze in ahead of a public transit bus. But the bus driver didn’t hit the brakes, which would have allowed the Google car to merge. The result? “Body damage to the left front fender,” according to the accident report, and a lot of unexpected work for Google’s PR team.

This accident should not only have been expected, it’s entirely welcome. Google’s self-driving cars have driven more than one million miles to date, with a safety record far superior to any human who might have driven that much. But if any car drives for long enough, it will eventually get into an accident that is at least in part its own driver’s fault.

Google’s internal report into the incident says that “we clearly bear some responsibility, because if our car hadn’t moved there wouldn’t have been a collision.” But that’s true at a much bigger level: the only way to avoid any kind of collision is not to move. Which pretty much defeats the purpose of being a car.

In this case, the Google car was navigating one of the most complex social interactions any driver ever faces. Two lanes of traffic had to merge into one; the lane on the left had right of way, and the Google car was in the lane on the right. Given that this is Mountain View, if the Google car had waited for a wide-open space before it entered the left-hand lane, it could have been waiting for hours.

Which means that the Google car, in this situation, did exactly what it should have done. To be sure, it could have simply sat there, motionless, for an hour or so, until the way was unequivocally clear. But that’s not what anybody wants self-driving cars to do. They need to follow the rules of the road, but they also need to get to where they’re going, and display the minimum level of aggression necessary to achieve that goal.

999 times out of 1,000, a car as careful as Google’s will be allowed into traffic without any problem. But every so often a bus driver will be distracted, or annoyed, or careless – and, in that case, the Google car will end up at fault in an accident. It’s safe to say, however, that if the bus had been a self-driving Google bus, this accident would not have happened. It was clear what the car was trying to do, and a constantly alert bus driver would have let the car do exactly that.

Obviously, every time there’s any kind of accident involving a Google self-driving car, Google needs to take a very hard look at exactly what happened. But even when the car is legally at fault, that doesn’t mean there was some kind of software error. The only way to avoid this kind of accident entirely is not to drive at all.

Greg Ip, the chief economics commentator at the Wall Street Journal, is no expert on self-driving cars, but he has written a book called Foolproof: Why Safety Can Be Dangerous and How Danger Makes Us Safe. “It’s ridiculous to aim for a zero-accident car,” he says: “a zero-accident car is no car, which is not an optimal solution.” The important thing is to be significantly safer than human-driven cars, rather than to try to achieve no accidents at all.

What’s more, it doesn’t make much, if any, difference which car is legally at fault. If a Google car brakes too sharply at a red or yellow light and ends up getting rear-ended, the car behind it is legally at fault, but the Google car still caused the accident. The goal should be to minimize accidents, rather than to get caught up in legal liability. At some point, we’ll move to a world of 100% self-driving cars, and at that point complex interactions with human drivers will be a thing of the past. But for the time being, driving technology has to work with human fallibility, regardless of what the various drivers should be doing.

So let’s celebrate the fact that Google has built enough risk into its cars that they occasionally cause an accident. (And let’s celebrate, too, the fact that when that accident finally occurred, it caused only a tiny bit of damage, and that all of the damage was to the car itself.) If the Google self-driving car is to become a useful piece of technology, it needs to take risks occasionally. The Valentine’s Day fender-bender proves that it’s doing exactly that.
