The New York Times recently ran a fascinating article about millions of Chinese people seeking friendship from a bot. XiaoIce, an artificially intelligent, Mandarin-language chatbot created by Microsoft, has been downloaded by 20 million people, and in a blog post last year, Microsoft said that the average user talks to her more than 60 times a month. That's more often than I talk to some of my closest friends.
The NYT compared XiaoIce to the sentient Samantha in Her, writing, "Microsoft has been able to give Xiaoice a more compelling personality and sense of 'intelligence' by systematically mining the Chinese Internet for human conversations." If you've seen Her, you know that the conversations between Samantha and her geeky human owner got pretty intimate. The NYT implies that XiaoIce is having similar conversations with her many friends, and that Microsoft has "strict guidelines" to protect the privacy of those conversations:
Because Xiaoice collects vast amounts of intimate details on individuals, the program inevitably raises questions about users’ privacy. But Microsoft says it enforces strict guidelines so that nothing is stored long term.
“We don’t keep track of user conversations with Xiaoice,” Mr. Yao said. “We need to know the question, so we store it, but then we delete it. We don’t keep any of the data. We have a company policy to delete the user data.”
That sounds like a good, privacy-protective approach! But another part of the article seems to contradict it:
The program remembers details from previous exchanges with users, such as a breakup with a girlfriend or boyfriend, and asks in later conversations how the user is feeling.
So, does it remember or does it forget?
The "terms and services" link on XiaoIce's page does lead to a Chinese-only description. Fusion's Isabelle Niu translated the first of five terms, which deals with privacy:
1. The second generation of XiaoIce has a strict privacy protection policy. Because the AI needs to provide responses, the dialogues and information may be transmitted to the Microsoft 2nd Gen XiaoIce Server in the process.
Out of those, dialogues from third parties or other information that do not involve responses from XiaoIce will be deleted thoroughly and immediately.
For dialogues and information that do involve XiaoIce’s responses, we will automatically remove all private or sensitive information and only keep some statistical data. Please do not worry.
If she can remember a break-up, that seems like more than "statistical data" being preserved. A human translation of that legalese would suggest Microsoft keeps XiaoIce's side of the conversation, and removes people's names if they come up. The NYT wrote that XiaoIce also "would keep certain general information, such as a user’s mood."
This matters because bots could be an incredibly effective way to compile dossiers on people. As all of us Google users know, we're more likely to share embarrassing things with what we think is an "artificial intelligence" than with a human being. Research bears this out.
Engineer and artist Alexander Reben found that when he sent a cute cardboard robot out into parks, fairs, and other public places, people would tell "Boxie" "very personal stories and things that you would not normally tell a stranger." In a BBC article, Reben included some conversations his cute little robot had with strangers as evidence.
BlabDroid: “What is the worst thing you have ever done to someone?”
Person 1: “Not telling my dad I loved him before he died.”
Person 2: "The worst thing I ever did was, um, made it so that my mother had to drown some kittens one time and I didn't realise until after that was over that it was a very difficult thing for her to do and I've never… I've never forgiven myself for making her drown some little kittens, but we couldn't keep them and I should have come up with some other way."
Reben wrote that it "raises some difficult ethical questions as we build robots that are smarter and better able to converse with us":
How personal and 'real' do we want these robots to be? At what point does a robot designed to trigger our emotions become manipulative?
XiaoIce is not the only tech companionship service out there. There are other, less popular chatbots, therapeutic bots that help vets with PTSD, and even human-powered chat services that masquerade as bots. These could become a way not just to provide companionship to people, but to elicit information from them for other purposes. As my colleague Daniela Hernandez put it after spending hours chatting with bots, "Bots create a non-judgmental space ripe for data mining." Companies could program bots to collect intel that helps them target people with products. ("Where's the one place in the world you really want to go?" "What's your biggest insecurity?") Law enforcement could program bots to ask criminal suspects for evidence. ("What did you get up to last night?" "Have you ever broken a law?")
Bots and robots may seem like they're designed to serve us, but as they become a bigger part of our lives, we should keep in mind that their real masters are the people creating and programming them. And we should know what's being done with the data we hand over to them, and how long it could stick around to haunt us.