Volkswagen isn’t the first company to use software to break the law and it won’t be the last

Volkswagen CEO Martin Winterkorn stepped down Wednesday because of Dieselgate, the scandal over the company’s installation of deceptive software in about 11 million diesel cars to help them illegally pass emissions tests.

The software was relatively straightforward: during an emissions test, a car’s wheels spin but its steering wheel doesn’t. The absence of any turning or jostling of the steering column signals that the car isn’t out on a normal drive and that an emissions test is underway. That signal activated a defeat device, which limited the harmful gases the car emitted, allowing it to pass the test.
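For illustration only, here is a minimal sketch in Python of the kind of check such a defeat device might perform. The signal names and thresholds are invented for the example; Volkswagen’s actual engine-control code has never been published.

```python
# Hypothetical illustration of defeat-device-style logic, not Volkswagen's actual code.
# Signal names and thresholds are invented for the sake of the example.

def looks_like_emissions_test(wheel_speed_kmh: float,
                              steering_angle_deg: float,
                              steering_rate_deg_per_s: float) -> bool:
    """Guess that a dynamometer test is underway: the drive wheels are
    turning, but the steering wheel is essentially frozen."""
    wheels_spinning = wheel_speed_kmh > 10.0
    steering_static = (abs(steering_angle_deg) < 1.0 and
                       steering_rate_deg_per_s < 0.5)
    return wheels_spinning and steering_static

def choose_emissions_mode(sensors: dict) -> str:
    """Switch the engine's emissions strategy based on that guess."""
    if looks_like_emissions_test(sensors["wheel_speed_kmh"],
                                 sensors["steering_angle_deg"],
                                 sensors["steering_rate_deg_per_s"]):
        return "clean_mode"   # full exhaust treatment, passes the test
    return "road_mode"        # better performance, far higher emissions
```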

It was a conspiracy to sell cars as environmentally friendly when they were actually contributing heavily to our pollution problem. The company now faces billions of dollars in fines, as well as a potential criminal investigation by the U.S. Department of Justice.

That’s all bad news, but it gets worse: this is not the first time the car industry has used software to evade the law. And given the fluke that led to Volkswagen getting caught, other companies may well come to see coding law-breaking into their products as an innovation.

In 1998, the Environmental Protection Agency settled with Ford for $7.8 million because the car manufacturer had installed a “sophisticated electronic control strategy” in 60,000 1997 Econoline vans that let them spew harmful exhaust at high levels on the highway in exchange for better fuel economy. That same year, the EPA and the Justice Department fined diesel manufacturers $83.4 million for “installing computer devices” in heavy-duty diesel engines that led to illegal levels of air pollution.

“Most engine control units have the hardware and the software in place to bypass or alter the [regulatory] strategy,” said Anna Stefanopoulou, a mechanical engineer at the University of Michigan. “It is a matter of deciding to cheat instead of playing fair [and] it has happened in all automobile sectors.”

The bad behavior always seems to come down to a preference for performance or fuel efficiency over environmental concerns. VW opted for the former. That helped it build cars consumers wanted while boosting the bottom line: it raked in more money without having to invest resources in building cars that were both fuel-efficient and green.

Its actions boiled down to a conflict between the law and its own self-interest.

As devices—cars, TVs, smartphones, tablets, toys, robots—become smarter thanks to advances in artificial intelligence, the problem may get worse, as it may become more difficult to detect their bad behavior. The era of digital crimes looks like it’s just getting started.

“Here’s an extreme example: What if Amazon’s robotic warehouses could reorganize themselves to meet fire code requirements with very little advance notice of an inspection?” said Ryan Calo, a cyberlaw expert at the University of Washington.

Humans already do this: when we know an inspector or our bosses are coming, we try to be on our best behavior. But it could get more extreme as more warehouses swap humans for robots that could “memorize” a list of checkpoints they have to meet, and make sure they can mobilize to meet those standards.

“Other places I’d be worried about include speeding cameras, voting machines, and slot machines,” Calo said.

In 2000, a bunch of hanging chads altered the outcome of the presidential election. In the future, it could be programs that “eat up” the votes of certain demographics. There have already been allegations of voting machines being rigged. As the electoral process becomes more computerized, along with the rest of our lives, it will become vulnerable to digital fraud.

And, again, it gets worse because the reach of digital misdeeds is vast.

“The medical field, including insurance, is definitely one area at risk. The financial field is another. In both sectors, secrecy and privacy constraints may make it even harder to detect wrongdoing. It may require new rules that hold software engineers more directly responsible for their actions,” Bart Selman, an AI expert at Cornell University, told me today. Coming up with legally binding codes of ethics for engineers might help, he said.

Let’s imagine a scenario in which artificial intelligence gets really smart. It’s 2025, and a self-driving car needs to get its passenger to an appointment across town, 15 miles away. The speed limit is 30 mph on the available roads, but the person is running late and needs to get there in 20 minutes, not 30. The car’s Uber-like rating takes passenger satisfaction into account, and the better the rating, the more rides the car (and its owner) gets. So it learns a few tricks, like rolling through stop signs, not giving pedestrians the right of way, and speeding a bit, but only when it knows human police aren’t patrolling. And, oops, the memory shut down for an update, so there’s no record of the malfeasance.
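A toy sketch of that hypothetical, rating-chasing car, with entirely invented rewards and penalties, shows how “cheat only when unobserved” can fall out of simple reward-maximizing logic:

```python
# Toy sketch of the hypothetical rating-chasing car described above.
# The rewards, penalties, and detection logic are all invented.

def expected_reward(speed_over_limit_mph: float, police_nearby: bool) -> float:
    rating_bonus = 2.0 if speed_over_limit_mph > 0 else 0.0   # passenger arrives sooner
    fine_risk = 10.0 if (police_nearby and speed_over_limit_mph > 0) else 0.0
    return rating_bonus - fine_risk

def choose_speed(limit_mph: float, police_nearby: bool) -> float:
    # The "smart" car speeds only when the expected reward is positive,
    # i.e. when it believes no one is patrolling.
    if expected_reward(5.0, police_nearby) > 0:
        return limit_mph + 5.0
    return limit_mph

print(choose_speed(limit_mph=30.0, police_nearby=False))  # 35.0: cheats when unobserved
print(choose_speed(limit_mph=30.0, police_nearby=True))   # 30.0: behaves when watched
```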

“As everyday objects become smarter and more connected, we’ll need to worry about more nuanced evasions of law. We may also eventually worry about emergent behavior that violates the law without the engineer so intending,” said Calo.

How will policy makers, regulators and law enforcement keep up? So far, they haven’t been doing a very good job.

Bots scour the net buying up tickets to scalp, despite several state laws put in place to curb the predatory price gouging they enable. In Tennessee, there hadn’t been a single prosecution as of 2014, even though the practice was widespread and the state’s anti-bot law was six years old.

“There are actually two problems. The first is detecting the illegality,” Calo said. “The second problem is proving the engineers did it on purpose. Here, enormous pressure from the EPA led VW to admit its culpability. If it hadn’t, then we’d be in a situation like Toyota’s sudden acceleration where it took NASA to clear the software.”

Calo is referring to the 2011 investigation by the National Highway Traffic Safety Administration and NASA into unintended acceleration in Toyota vehicles. The 10-month-long investigation found that mechanical problems, not software, were to blame. But the point is that these agencies had to divert resources, perhaps resources they didn’t have, to figure out what was going on.

From recent reports, it’s clear that government agencies like the EPA, the Food and Drug Administration, and the U.S. Department of Agriculture sometimes just don’t have the expertise or the resources to run full and exhaustive checks on the products we use every day. Calo has suggested establishing a Federal Robotics Commission to oversee issues of software and robotic wrongdoing, but in the absence of that, we might have to rely on citizens and activists to help us keep machines honest.

Our best line of defense might be machines that police other machines. The goal of these so-called verification systems for AIs would be to ensure other AIs don’t misbehave, like a digital hall monitor. Unfortunately, such systems are largely in the research stage. Building one would require a separate “AI system that watches another AI to make sure it doesn’t do anything harmful,” Selman said, and people haven’t yet figured out how to do that well. With people like Elon Musk pouring money into the field of AI ethics, this is becoming a bigger area of study.
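As a rough illustration of what such a digital hall monitor could look like, here is a hypothetical sketch in which one component vets every action another system proposes before it is carried out. The function names and rules are invented, not drawn from any existing verification system.

```python
# Minimal sketch of a "digital hall monitor": one component vets the actions
# another AI proposes before they are executed. Names and rules are invented.

from typing import Callable

def hall_monitor(propose_action: Callable[[], dict],
                 is_allowed: Callable[[dict], bool],
                 execute: Callable[[dict], None]) -> None:
    action = propose_action()
    if is_allowed(action):
        execute(action)
    else:
        # Block and log the violation instead of letting it run.
        print(f"blocked disallowed action: {action}")

# Example: forbid the hypothetical car above from exceeding the speed limit.
hall_monitor(
    propose_action=lambda: {"type": "set_speed", "mph": 38, "limit_mph": 30},
    is_allowed=lambda a: a["type"] != "set_speed" or a["mph"] <= a["limit_mph"],
    execute=lambda a: print(f"executing: {a}"),
)
```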

But some government policies actually discourage the people who could inspect companies’ “proprietary” code and find these issues from doing so. The EPA opposed measures that would have made it easier for security researchers to examine the software used in trucks, cars, and agricultural machinery by exempting it from copyright law. The agency objected on the grounds that the exemption might make vehicles more vulnerable to hackers. But it seems the bigger fear should be how vulnerable our environment is to deceptive cars.

Daniela Hernandez is a senior writer at Fusion. She likes science, robots, pugs, and coffee.
