It's hard work being funny—especially for robots

Vinith Misra is one of the funnier people in tech. As a consultant for the hit HBO show Silicon Valley, he’s best known for having crafted a mathematically complex dick joke. At IBM, where he works full-time on Watson, part of his job is to figure out how to give a robot a sense of humor.

AI “is not about replacing humans, but interacting with them,” Misra told me. “That’s where humor is super valuable.”

We’re going to be interacting with machines more and more as robots and smart devices enter our homes, cars, schools, hospitals and workplaces. If Misra and a host of other computational humorists are successful, those machines might be able to jest with us in human-like ways, as TARS does in the movie Interstellar.

When TARS jokes after lift-off that his companions will make great human slaves on his robot colony, our protagonist turns the robot’s humor setting down from 100% to 75%, hinting at the level of personalization that’ll be necessary to create computer humor we like.

“The larger goal is making AIs that come across to humans as more natural,” said Mark Riedl, an interactive computing expert at Georgia Institute of Technology who studies humor. “Humor can be used to put people at ease and create rapport. This will be crucial where we’re building societies surrounded by AIs.”

Humor is an integral part of our intelligence. In human communities, it serves as a ubiquitous social lubricant. Groups of friends quickly craft “inside jokes.” We quip to break the ice in uncomfortable situations.

“There don’t exist humorless societies… It’s a way to create camaraderie. It doesn’t matter what you say. The bar is very low,” said Ben Bergen, a cognitive scientist and linguist at the University of California, San Diego. “It suggests that we’re actively trying to laugh. We want to find people humorous. We want to engage.”

For machines to be fully integrated into human culture, they’ll need to be funny. There are signs that’s starting to happen. Last year, a Microsoft bot was able to pick out funny submissions to the New Yorker cartoon captioning contest. It was a significant feat, but it required a lot of human supervision, and the machine’s definition of funny didn’t always match humans’. Meanwhile, researchers in Japan and at Carnegie Mellon University are developing joke-telling robots that people seem to be amused by. Others have developed joke generators. But those are mostly primitive: their jokes are largely pre-written by human creators or focused on simple types of humor, like puns.

For example, the CMU bot, dubbed Data, began a recent standup session with, “A doctor says to his patient, ‘I have bad news and worse news. The bad news is that you only have 24 hours to live.'” The patient wonders how the news could possibly get worse. “I’ve been trying to contact you since yesterday,” the doctor replies. The audience starts laughing, but before they even finish clapping, the oblivious Data steamrolls into his next joke, this time about the Swiss. His standup chops aren’t quite perfect.


Human-level, 100% computer-generated humor is still years, if not decades, away. Several people I spoke with for this story said that humor is the most difficult problem in AI, a kind of final frontier. It depends on making fast-paced connections between seemingly dissimilar concepts on the fly, based on our memory of the world. Humor requires a deep understanding of language, emotional context, cultural references and social norms. It involves, as one person put it, all the intelligence of the human mind.

Crack humor and you end up with real, multi-faceted AI. The financial impact isn’t yet known, but it’s likely to be in the billions of dollars.

Computational humorists, most of whom have backgrounds in computer science or artificial intelligence, are out to replicate human humor (and intelligence) in machines. Some have worked as comedians or in entertainment. Others have collaborated with the advertising industry. Companies often hire humor consultants because humor can influence people’s purchasing decisions, says Carlo Strapparava, a computational humorist. Others see the benefits humor brings to human relationships and see computational-humor research as a way to gain insights into the human psyche and neurological diseases that affect how humans connect.

Much of the ongoing research has been focused on teaching computers what’s funny so they can detect humor, and that’s been “fairly successful,” says Julia Taylor, a computational humor expert at Purdue Polytechnic Institute. The Microsoft-New Yorker cartoon challenge is one example. Humor resulting from blatant failure is an area where robots have also excelled. Take recent bots that write their own TED talks, or my own erotica-penning machine.

They’re both so bad, they’re funny. But that kind of incidental humor won’t be enough to put robots to work in fields like advertising, customer service or healthcare, where humor has been shown to increase engagement—and where AI experts see it making big impacts in the future.

So, researchers are also trying to dissect the anatomy of jokes and comedy to help robots come up with their own. Riedl, who heads up the Entertainment Intelligence Lab at Georgia Tech, spent almost a decade studying human improv actors. His goal was to learn as much as possible about the methods and tricks actors utilize to make audiences laugh and use that to program an improv “agent,” or bot, that could do the same.

“It’s been a very difficult and challenging problem,” he told me. “It’s real-time problem-solving”—and learning, which are still really difficult for computers. “They can take an hour or a day to crunch the data,” he added. No one wants to wait that long for the punchline.

Robots have made some strides in generating simpler humor, through techniques like wordplay. HAHAcronym, developed by Strapparava, reworks popular acronyms. FBI (short for Federal Bureau of Investigation) becomes Fantastic Bureau of Intimidation. PDA (personal digital assistant or public display of affection) becomes Penitential Demoniacal Assistant.
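The acronym trick lends itself to a toy sketch. The snippet below is an illustrative guess at the general idea, not HAHAcronym’s actual algorithm: keep each expansion word’s initial letter, but swap the word for an incongruous same-initial substitute. The mini-lexicon here is entirely made up.

```python
import random

# Toy acronym-reworking sketch (an illustrative guess, NOT HAHAcronym's
# actual algorithm). Idea: preserve each expansion word's initial letter,
# but substitute an incongruous word that starts with the same letter.
SILLY_LEXICON = {  # hypothetical mini-lexicon keyed by initial letter
    "f": ["fantastic", "fuzzy"],
    "b": ["bakery", "balloon"],
    "i": ["intimidation", "improvisation"],
}

def rework_acronym(expansion, lexicon, rng=random):
    """Swap each word for a same-initial word; keep words with no entry."""
    out = []
    for word in expansion.split():
        choices = lexicon.get(word[0].lower(), [word.lower()])
        out.append(rng.choice(choices))
    return " ".join(w.capitalize() for w in out)

print(rework_acronym("Federal Bureau of Investigation", SILLY_LEXICON))
# output varies with the RNG, but every word keeps its initial letter
```

A real system also has to filter the candidates for incongruity, which is where most of the actual difficulty lives.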

As that last example shows, computer-generated jokes don’t always work. That’s to be expected. Human ones don’t always land either. The problem is that we’ve come to think of computers as superhuman, so it can be a much bigger bummer when they fail us.

“When a computer doesn’t understand that you are joking, or doesn’t provide a good joke, it’s obvious,” Taylor told me. “That’s why it’s so difficult to deal with it.”

Paradoxically, it’s also hard to deal with their outsmarting us. (Human psychology is complicated!) When Taylor was building a computerized jester, it told her a joke she didn’t get. The machine tried to explain why the joke was funny. Its explanation made sense. But the harm was done. She doesn’t recall the exact joke, but thinks it might have been a knock-knock joke.

“It was right, from its point of view. It’s just not something I wanted to hear…I didn’t appreciate it,” she recalls. “I was lacking some information it had.”

It felt a little like being at a comedy show where everyone’s laughing and you don’t understand the joke, she said. The lesson she took away from it was that a big challenge for bot-generated humor is our unpredictable and easily bruised human egos.

Kim Binsted, a former comedian and computer-scientist-turned-space-scientist who spent the early part of her career studying computational humor and AI, had a similar experience. The pun generator she built years ago would often come up with puns many humans didn’t get. Once, it asked, “What do you call a pessimistic investor?” The answer was a grizzly bear. (When you’re bearish on financial markets, you’re pessimistic about the way stocks will perform.) Few people got the reference, so rather than amusing people, the joke fell flat. She had to hand-tune the generator so it would tell jokes that would land well with its audience.
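Binsted’s generator followed a question-answer riddle schema. The sketch below captures only that surface shape; it is a loose illustration, not her JAPE system, and the “knowledge” entries pairing a property, a noun and a punning answer are hand-written assumptions here, whereas a real generator derives such links from lexical resources.

```python
# Toy riddle-schema sketch, loosely inspired by punning-riddle generators
# such as Kim Binsted's JAPE (this is NOT her actual system).
# Schema: "What do you call a <property> <noun>?" with a punning answer.
PUN_FACTS = [  # hypothetical hand-written entries; a real system mines these
    ("pessimistic", "investor", "a grizzly bear"),  # "bearish" = pessimistic
    ("cold", "dog", "a chili dog"),                 # chilly / chili
]

def riddle(property_, noun, answer):
    """Fill the question-answer schema with one knowledge entry."""
    return f"What do you call a {property_} {noun}?", answer

for fact in PUN_FACTS:
    question, answer = riddle(*fact)
    print(question, "--", answer)
```

As the grizzly-bear example shows, filling the schema is the easy part; knowing which entries a given audience will actually get is the hard part.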

These examples, though simple, get at what makes computational humor so difficult and fascinating. Sure, it’s great when Google or Yelp turns up restaurant options we were previously clueless about. We’ve culturally wrapped our heads around the fact that these discovery tools have access to much more information than any one human.

But “humans don’t like machines to be emotionally cleverer,” said Bergen, the UCSD linguist. And, humor isn’t just about facts. It’s emotional. It makes us feel something. And when computers tread away from the merely factual into the emotional and interpersonal, it can come off as creepy. We like the idea of befriending our devices—Hi, Siri!—but we don’t like the idea of being emotionally manipulated by them. And at the end of the day, humor is emotional manipulation.

“The hard AI challenge is that there are also sociological challenges,” Bergen added. “A machine might not, for a while, be the type of thing people are willing to accept humor from, even in the same circumstances as a human.”

On top of that, humor differs from culture to culture, from person to person. Humor is about understanding the individual you’re interacting with and molding your knowledge of the world to play on what you know about his or her experience.

For instance, over the holidays, my sister and I watched Master of None. We loved the scene in which Aziz Ansari’s character says, “The sickening! It’s happening!!” on the set of a movie he’s working on. When she started feeling a cold coming on, she’d make a reference to the show. The joke culminated in an emoji exchange.

We cackled gleefully. It only worked because we both knew the show and had zeroed in on that line as funny. We knew she wasn’t on the verge of fatal illness, so the joke was appropriate. Machines can’t yet take all that into consideration and come up with a joke that works.

“Personalization is really what we’re waiting for with humor. We can’t have a global model for humor,” said Misra, the IBM humorist. “We need to understand [individual] people better to make [good] jokes.”

In other words, to have computers that are funny and can amuse us, we have to be okay with companies knowing even more about us. Good humor generation and understanding requires pinning down emotional states as well as better natural language processing, the ability of computers to parse and comprehend written language.

That kind of understanding is in its infancy. IBM has a “personality insights” app that looks at texts and emails and gives you clues into what kind of person someone is. (Misra’s work on humor isn’t baked into this service, he says.) An app called Crystal does something very similar so that you can approach someone in the most effective way, like using emoji or telling jokes. It even tells you what kind of phrasing to avoid. You need to generate your own jokes, though.

Microsoft’s popular Chinese-language chatbot, Xiaoice, is better in some respects. It “remembers” things about you and asks how you are based on that information. Part of its appeal is that it tells jokes. These are pulled from the web, meaning they’ve been written by other humans.

But the sophisticated AI that powers Xiaoice’s humor can take into consideration the user’s social relationships, profile and interests when choosing which joke to “tell” from its internet grab-bag, according to Di Li, the Head of the Xiaoice Project and director of the Microsoft applications and services group in East Asia. The goal is not to have her come up with her own jokes, he said, but to make sure they fit a character.

“Xiaoice is really like someone who is your close friend,” he said in an email. “This helps engagement a lot.”

We like funny people. It makes sense we’re drawn to funny bots, too.

Daniela Hernandez is a senior writer at Fusion. She likes science, robots, pugs, and coffee.
