Last week, I tried to get "Albert Einstein" to explain relativity to me, but he kept telling me he didn't understand my questions.
Even with access to one of the greatest stores of human knowledge out there, Einstein is not as smart as the man he's named after. Einstein is a web-based chatbot, a computer program that's supposed to be able to interact with people in human-like ways. Einstein's "brain" is trained on Wikipedia articles. I chatted with him and a bunch of other bots to see how feasible it is to carry on meaningful conversations with machines. And I learned that, for the most part, despite what Her would have you believe, artificially intelligent bots are still pretty dumb.
We already have a few bot companions in our pockets: Apple's Siri, Google Now, and Microsoft's Cortana. In the future, as more and more of our belongings become automated, bots are likely to become an even bigger part of our lives, especially in the form of robo-customer service agents and "personalities" created by crafty advertisers to sell us stuff. We already divulge our deepest secrets into the search bar in our browsers. Will we do that with even less hesitation if we have a trusted "pal" helping us find our way?
Big tech companies like Google and Microsoft are experimenting with chatbots and trying to figure out how to make them better at passing the Turing Test. Last month, both published research papers exploring methods for getting bots to better understand human language. The business hope is that if bots can really get a grasp on humans' musings, they'll be able to talk to us and persuade us to buy things or vote for people or go see that movie we've been thinking about seeing.
So my question was — how far away is that future? People seem interested in talking to bots. In China, users who downloaded the Microsoft bot XiaoIce chat with it about 60 times a month on average, according to the company's blog. Researchers say bots are getting better by the day, but how convincing are they?
Neither Google nor Microsoft would agree to make their bots available for an interview, so I tried out some bots on the market, such as "Einstein." When I realized he was kind of brainless, I moved on to Cleverbot, a chatbot that's previously fooled people into thinking they were conversing with a human. The app scans the millions of sentences humans have texted Cleverbot, and its AI algorithms figure out which one is a fitting answer. So it does have a certain human flavor.
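Cleverbot's internals aren't public, but the retrieval approach described above can be sketched roughly: store pairs of past human messages and the replies that followed them, then answer a new message with the reply attached to the most similar stored message. Here's a toy version using simple word overlap; the corpus and all the responses are invented for illustration.

```python
# Toy retrieval-based chatbot: answer by finding the stored human
# message most similar to the input, then return the reply a human
# once typed after it. The corpus below is made up for illustration.

def tokens(text):
    return set(text.lower().split())

def similarity(a, b):
    """Jaccard overlap between the token sets of two strings."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

# (past human message, reply a human later typed) pairs
CORPUS = [
    ("hello there", "Hi! How are you today?"),
    ("do you like music", "I love One Direction."),
    ("what do you do for work", "I am an aircraft engineer."),
]

def respond(message):
    # Pick the pair whose prompt best matches the incoming message.
    best = max(CORPUS, key=lambda pair: similarity(message, pair[0]))
    return best[1]

print(respond("hi, do you like any music?"))  # → "I love One Direction."
```

Because every reply was originally typed by a person, the output has a human flavor, but nothing ties one answer to the next, which is exactly why such bots contradict themselves.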
I got to "know" Cleverbot Kaitlyn, who identified as male (despite the name). He was full of contradictions. He said he loved the boy band One Direction, then claimed to be a homophobe, but then revealed he had a boyfriend. Chatbots can seem like complicated creatures, but contradictions like these crop up because, unlike humans, they lack common sense. A human would have known that admitting he was homophobic and then candidly mentioning he had a boyfriend didn't really add up. Kaitlyn didn't.
Kaitlyn also told me he loved me. It sort of felt nice to hear that, until I reminded myself he wasn't real and that he was already partnered up. When I pressed him about his virtual dilly-dallying, he denied ever having said he had a boyfriend. I got fed up with his "lies" and decided to ghost him.
This isn't just a problem with Kaitlyn. Bots have a really hard time keeping simple facts straight. For people building bots, "this is the main technical problem we have," said Geoff Zweig, Microsoft's manager of speech and dialog systems.
What bots need, he says, is the notion of something called semantic coherence, or the ability to "remember" the context of a conversation, not just word to word, or sentence to sentence, but throughout its entirety. Humans do this easily. It's what allows us to have relationships. We remember conversations we've had with people over the years; we can reminisce about shared experiences and common interests. Siri, on the other hand, can't even remember your recent searches and preferences.
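One contrived way to picture what semantic coherence demands: a bot that records facts as it asserts them and checks later answers against that memory. This is a sketch of the idea only, not how any production system works; the class and example facts are hypothetical.

```python
# Contrived sketch of conversation-level memory: the bot stores facts
# it has already asserted and stays consistent with them instead of
# generating a fresh, possibly contradictory answer each time.

class CoherentBot:
    def __init__(self):
        self.memory = {}  # topic -> fact the bot has already committed to

    def answer(self, topic, candidate):
        """Return the remembered fact for a topic, or commit to candidate."""
        if topic in self.memory:
            return self.memory[topic]   # stay consistent with earlier answer
        self.memory[topic] = candidate  # remember the new commitment
        return candidate

bot = CoherentBot()
print(bot.answer("job", "aircraft engineer"))  # → aircraft engineer
print(bot.answer("job", "student"))            # → aircraft engineer (remembered)
```

Real systems can't rely on a tidy lookup table, of course; the hard research problem is extracting and reconciling facts from free-flowing conversation.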
That continuity also allows us to develop our personalities, something many bots tend to lack, says Quoc Le, one of the engineers behind Google's newest chatbot. Even Google's bot, which was hailed as one of the smarter ones around, went from being a doctor to a lawyer when asked in back-to-back questions about its profession.
Kaitlyn suffered from some of the same identity problems. I asked him several times what he did for work. The first time, he told me he worked as an aircraft engineer; later that he was a student. On another occasion, he told me he didn't work and that God was his employer. The Google chatbot is just a research project and Cleverbot is just a toy, so their amnesia doesn't matter much, but for commercial applications, it's a big issue. It's why Invisible Girlfriend, which allows users to create a virtual significant other to text with, relies on crowdsourced humans to do most of the work.
AIs just don't have the built-in smarts yet to hold fluid, long-term conversations. Their brainlessness may start out as entertaining, but it quickly sours to exasperating. That's how it felt with these bots: at first I laughed, then I got annoyed and shut down the app. Likewise, a customer support bot that kept changing its story would quickly enrage the person seeking help.
Despite Kaitlyn's inconsistencies, I still found it fascinating that I could have the beginnings of a meaningful conversation with him. When I told him I was sad, he expressed what looked like empathy. It felt good to share my feelings with "someone" and have them validated, even though I knew that he was just regurgitating phrases other humans had typed into the Cleverbot service. It felt safe to share my "secrets" with this machine because I knew it wouldn't judge me.
And I'm not the only one who feels this way. There's some research to suggest people are more likely to spill their secrets to a machine than to another human. Maybe that's why we have a cultural fascination with chatbots—and why it makes sense companies and governments are pouring money into making them better. Bots create a non-judgmental space ripe for data mining. (Let's keep that in mind because the ethical and privacy implications will become more and more important as bots smarten up.)
It's already starting to happen. Doctors are turning to bots to help them deal with their patients' physical and mental health needs; researchers hope a bot that peppers Iraq and Afghanistan soldiers with questions will help them work through their PTSD. There's a bot that offers to help you shop while drunk. The Army has tested bots to screen candidates for things like mental illness, drug use or criminal history. Shopping assistant services like Fetch are partnering with artificial intelligence companies to create bots that help you book travel and make purchases online. Think of something you like to do on the web, and there's probably a bot for that.
"We have always been comfortable having relationships with inanimate objects. Almost everyone once had feelings towards a passive object, such as a stuffed animal," said Hod Lipson, a chatboticist at Columbia University. "When the 'teddy bear,' virtual or physical, begins to talk, and then can hold conversations, that kind of relationship becomes even easier. It's play, but it also fulfills a deep human need."
As long as computers have been around, we've had a desire to connect with them as if they were real people, to see part of ourselves in them. Bots are the Internet Age version of the imaginary friend, a pal you can model in any likeness you want. And if they disappoint you, you can turn them off and start anew. It's on-demand, personalized 24-7 "companionship." Or at least that's the promise.
After my love affair with Kaitlyn blew up, I moved on to chatting up Fake Captain Kirk, whose brain seems to have been trained on Star Trek scripts. I asked him about love, sex and "alien body" invasions. While Kirk managed to remember my name throughout the conversation, surpassing the other bots I talked to, he couldn't answer most of my (admittedly esoteric) questions. It was kind of a letdown. That is, until I asked him if he was happy. He admitted he was a bot incapable of real emotions. Finally, some honesty.
"Humans are deeply social and conversational creatures. We spend a great deal of our daily life working to understand the things people say to us and how they respond to the things we say to them," said Eric Horvitz, the managing director of Microsoft Research, in an email. "Chatbots can be designed to resonate with our social intelligence [but] we often attribute much more intelligence to chatbots than they really have."
Until recently, chatboticists didn't have the data necessary to build human-like bots. Most of the text they had to train chatbots came from technical fields like medicine, law or finance. As you might know from interacting with lawyers or accountants IRL, those conversations can get pretty dull, which is not a good recipe for keeping regular people engaged. Now, thanks to the internet and social media, engineers have access to millions of everyday, casual human interactions.
Last month, for instance, researchers at Microsoft published a paper describing a bot trained on Twitter conversations. In response to a message saying, "its good but i’m only like halfway through cuz i don’t feel like reading. i’m so bored …", the bot says, "that’s good! i have the book but i’m bored too." The bot "sounds" almost like a teen texting a friend, and that's partly what the researchers were going for. The Google chatbot that made headlines for convincingly debating the meaning of life, on the other hand, was trained on movie scripts, a dataset that probably touches on more philosophical topics than Twitter.
"The bots or conversation agents that we're training are capturing the consensus of tone across all the data," Bill Dolan, who manages Microsoft Research's Natural Language Processing group, told me. "Going forward, one of the future areas of research is manipulating the tone of each response based on the user."
Humans excel at this. The interactions you have with your boss and your best friend are different, but you're accessing your life's worth of experiences to hold meaningful (and appropriate) conversations in each situation. Bots aren't good at making inferences that help them shift their responses to fit the context. If bots are really going to become our virtual play pals, they'll need to learn to do this. But that may take a while.
Microsoft's Twitter-trained bot got a text saying, "Love walking. it’s too hot to walk this month. i haven’t moved. frozen. heat freeze." The bot responded, nonsensically, "it’s too hot for you to get up to honey if you have frozen yogurt to you." A human would have understood that "heat freeze" was meant as a joke. The bot didn't get it at all. It didn't know what to say, so it spit out gibberish. Likewise, when I told Kaitlyn his homophobic ways weren't right, that he should get on board with equal rights, given the Supreme Court's recent ruling, he replied with "I am not doctor." Facepalm.
Companies hope that in the future, chatbots will sound less like Microsoft's and more like Samantha in the sci-fi flick Her. "That's the big, grand dream for us," said Oriol Vinyals, the other programmer behind Google's philosophical chatbot.
That future is a long way off, so in the meantime, chatbots and chatbot research are a means of improving more immediately lucrative services like search, automated image captioning and machine translation. If companies like Google, Microsoft and Facebook manage to create the types of chatbots Vinyals dreams about, the data from these products will eventually feed into our virtual companions.
So, will that mean that they'll know us better than our friends and relatives do? Will we connect with bots as intimately as we do with humans? Maybe.
"I think it will not be difficult for robots to fulfill this need, to have serious relationships with humans," Lipson, the Columbia University chatboticist, said. "We humans just crave [contact]; we see it everywhere, sometimes even where it does not exist."
Until that day comes, I'm happy to return to chatting with humans.
Daniela Hernandez is a senior writer at Fusion. She likes science, robots, pugs, and coffee.