Elena Scotti/FUSION

Crosswalk buttons in New York are a lie. So is the "door close" button in most elevators. That ticking clock telling you your download is 68% finished? Also a lie.

These devices don't actually do what they promise; they exist to manage our impatience. Designers realized that we fare better psychologically when we think we have control or know how long we need to wait, so these features were designed to lull us into a false calm.

If the idea of deception being programmed into those technologies disturbs you, hold onto your Wi-Fi-connected seat because it's going to get worse. Machines are going to be designed to be more human-like, and part of being human is the ability to lie and deceive.

Sometimes robo-slaves will deceive us for our own good: Take physical therapy robots that help patients strengthen a weakened limb. The robots learn how much pressure the patients are capable of exerting and then "lie" to them, saying they're pushing less hard than they actually are, in order to get them to push to the next level. The idea is that people misjudge their own physical limits, and that misjudgment holds them back, so the robot lies to them for their own good. It works because the patients assume the machines will always give them the real readings.
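The mechanism behind that therapeutic lie is simple to sketch. The snippet below is a hypothetical illustration, not the actual device's software; the function name and the 15 percent bias are invented assumptions.

```python
# Hypothetical sketch of a therapy robot's "white lie": it measures the
# force a patient exerts, then reports a slightly lower number so the
# patient pushes harder next time. The 15% bias is an invented example.

def displayed_force(measured_newtons: float, bias: float = 0.15) -> float:
    """Under-report the measured force by `bias` (0.15 = 15% lower)."""
    return round(measured_newtons * (1 - bias), 1)

# A patient who pushes with 100 N is told they pushed with only 85 N.
print(displayed_force(100.0))  # → 85.0
```

The patient trusts the number because it comes from a machine, which is exactly what makes the nudge work.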

More disturbingly, some machines have seemingly developed the ability to deceive on their own, and to do so at humans' disadvantage. In January, researchers at the University of Alberta unveiled a Texas hold'em poker-playing algorithm they claim is essentially unbeatable. Its invincibility is due, in part, to its ability to bluff—or perform sanctioned deceptions. The researchers programmed the machine to know all the rules of the game, but it makes individual game-playing decisions by itself, its only instruction being to maximize its reward. Its deceptive behavior emerged on its own.

Google is using similar tools to train machines to play Atari games, achieving superhuman levels in some cases. This type of AI, dubbed reinforcement learning, allows robots to adapt to different situations, and it's not unlike the way humans figure out what action is best suited to a particular situation. If a fake smile gets us what we want, we learn to do it again and again.
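The core loop of reinforcement learning is small enough to show. In this toy sketch (the payoffs, action names, and parameters are all invented for illustration, not drawn from the poker or Atari systems), the agent is told only to maximize reward, and "bluffing" wins out simply because it pays best:

```python
# Minimal sketch of reinforcement learning: nobody programs "deceive";
# the agent just learns which action maximizes reward. All payoffs here
# are invented toy numbers, not the real poker bot's values.
import random

random.seed(0)
actions = ["fold", "bet_honest", "bluff"]
# Invented expected payoffs: in this toy game, bluffing pays best.
payoff = {"fold": 0.0, "bet_honest": 0.3, "bluff": 0.5}

q = {a: 0.0 for a in actions}     # the agent's value estimate per action
alpha, epsilon = 0.1, 0.2         # learning rate, exploration rate

for _ in range(5000):
    # Explore occasionally; otherwise exploit the best-looking action.
    if random.random() < epsilon:
        a = random.choice(actions)
    else:
        a = max(q, key=q.get)
    reward = payoff[a] + random.gauss(0, 0.1)   # noisy reward signal
    q[a] += alpha * (reward - q[a])             # incremental value update

# Deception "emerges": the learned best action is to bluff.
print(max(q, key=q.get))
```

The point of the sketch is the one the article makes: the deceptive behavior isn't written anywhere in the code, it falls out of reward maximization.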

The difference, though, is that robots can scan through millions of possible actions and outcomes much faster than any one human could. This creates a sense of unpredictability, and it's part of the reason some researchers and ethicists are worried about bad robots. They could become more adept than humans at lying. If we tell a stock-trading bot to maximize profits, will it achieve that goal through unethical means its creator didn't see coming?

"I think robots are awesome," said Woodrow Hartzog, a scholar at the Stanford Center for Internet and Society who specializes in human-robot interactions. "They can be the next great tool for human flourishing, but they can also be programmed to be machines of deceit and manipulation."

Carnegie Mellon University researchers are actually programming deceptive robots that "know" to fake left, in order to see how people react to being deceived.

The interesting outcome was how trusting we are of robots. Most test subjects were amused at the deception, and perceived the robot as doing it to make them laugh. We believe automated beings have our best interests at heart.

But that's not true, particularly with bots that masquerade as human. Just last month, Gizmodo analyzed Ashley Madison's leaked data and found more than 70,000 fembots on the site that lured customers into buying messaging credits. Hucksters constantly unleash bots on dating apps like Tinder and Grindr to "flirt" with people in an attempt to steal their credit card information or get them to buy products on third-party sites.

Virtual and physical robots are new territory for us in terms of detecting deceit. With a human, we can decipher when someone's being cagey. She might avoid eye contact or stutter while trying to come up with an answer. We don't yet have a roster of clues that tip us off when an AI is being deceptive.

But we're finding ways. In late 2013, a telemarketer with a human-like voice called customers hawking health insurance; undiscerning people may well have fallen for her sales pitch.

But when she dialed up TIME Washington Bureau Chief Michael Scherer, he noticed her voice wasn't quite right. Scherer asked "her" if she was a robot, and she denied it, but she couldn't answer questions that would be slam dunks for a human, like what vegetable is in tomato soup or what day of the week preceded the day she called him. The jig was up. But the rabbit hole may have gone even deeper: it may have been humans at the switchboard using a robot as a front.

Jeff Hancock, a Cornell University researcher who studies deception, thinks that if robots come across as shifty, in the way they behave or speak, we may be more inclined not to trust them.

"People believe other people. That's our default state," says Hancock. "One important question in front of us is: Will we have that same truth bias when it comes to interacting with a robot? This is going to be a big issue for designers."

Take, for instance, the new AI-powered Barbie, which will be able to have conversations with the kids who play with her. The point of Hello Barbie, which is scheduled for release in November, is to convince kids that they're interacting with a friend. She asks them for advice and "expresses" emotions, suggesting there's something going on in her bobble-head. What happens if kids fall for it and develop a real attachment to this thing? Will they feel betrayed when they find out it's not a living being?

Scientists are increasingly looking at how to make us reveal more of ourselves to robots, whether it's having a robo-shrink say "uh-huh" in the perfect way or designing an adorable robot with huge eyes that talks like a kid. In those cases, the intent is benign, but you could readily imagine how those same design features could be used to prey on people, to scam them or to pry information out of them.

"They're not just a passive recipient like a website that collects your clicks," said Hartzog. "Robots, particularly when you invite them into your homes, will have much more detailed pictures of what appeals to us and what doesn't—and any kind of biases that would be easy for it to exploit."

It's not far-fetched to think that we'll count physical and virtual bots as friends. Millions of Chinese now chat regularly with Microsoft's chatbot, Xiaoice. We'll have to keep in mind that these bots answer to two masters: us and the companies that created them.

"The key comes when you ask the question of whom does the robot serve," Hartzog said. "If we want them to be loyal to the people that buy them, we'll need to have a business model where the companies that make them won't exploit the people that buy the robots or their data by selling them to third parties."

The Future of Life Institute, an organization focused on making sure killer robots don't emerge, has started advocating for the design of a moral system for robots. The question then is whose morals get baked into our robopals and whose interests they're meant to protect.

"That sounds great on first glance, but you take three different humans and you'll get three different opinions on what is good for humans," said Juergen Schmidhuber, the scientific director of the Swiss AI Lab IDSIA. "If humans can't agree on what is good for them, how can we come up with an agenda for what the robots should do and what they shouldn't?"

Perhaps this is where personalization comes in. In the movie Interstellar, the astronauts were able to dial back the level of honesty their robot exhibited. Maybe we'll be able to do the same to our silicon-based companions. Or perhaps robots will learn from interacting with us—and they'll be as shifty as we teach them to be.

Daniela Hernandez is a senior writer at Fusion. She likes science, robots, pugs, and coffee.