This is how 4 different computers responded to the inkblot test

Since the 1950s, people have looked to the Turing Test as a measure of machine intelligence. The test asks whether a computer program can dupe a human judge into thinking it’s human during a text conversation. If a machine succeeds, it’s considered to have human-level smarts.

But of late, some computer scientists have called into question whether this simple task adequately evaluates intelligence. After all, intelligence is multifaceted. It’s social, emotional, linguistic, musical and creative.

Last week, Fernanda Viégas and Martin Wattenberg, two Googlers who lead the search giant’s Big Picture data visualization group, put machines up to a different kind of exam. Instead of having machine conversationalists trick humans into believing they were human, Viégas and Wattenberg presented four commercially available machine vision systems with Rorschach blots, the black-and-white images psychologists use to assess people’s personality and emotional functioning.

“Couldn’t [machines] have feelings, personalities, even psychological idiosyncrasies? Could they have their own strange personalities, different from any human?” they wrote in a Medium post. The inkblots, they reasoned, might help us “understand the subconscious thoughts of the new mechanical brains” that inhabit our world.

The experiment revealed something quite interesting: each machine saw something different. For instance:

Robot 4 is definitely the smart aleck in the group. Here’s another run-through:

This was the scorecard for all the experiments the four bots went through:

I’m not sure the experiment says much about their personalities, but it’s clear that each of these AIs “sees” the world in a unique way. Robot 3, for instance, is great at abstractions. Robot 4 is very literal. And Robot 1 and Robot 2 see real-life objects in the ink splotches they’re asked to analyze.

I decided to replicate Viégas and Wattenberg’s experiment using a photo of a palm tree I found on Wikipedia. Robot 1, a.k.a. AI startup MetaMind, totally missed the mark and thought it was seeing a bee or a quill pen. Robot 2, the Wolfram Language Image Identification Project, said it was a beach. Clarifai, Robot 3, made 10 guesses, including beach, tropical, idyllic, summer, and sand. And finally CloudSight, Robot 4, classified my image as a green coconut palm tree.
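Those four services are proprietary, but the same exercise is easy to sketch with an open-source classifier. Here’s a minimal version in Python, using torchvision’s ImageNet-trained ResNet as a stand-in for the robots; the model choice, the filename, and the top-five printout are my assumptions, not anything the four services actually run:

```python
# Minimal sketch: ask one off-the-shelf classifier what it "sees" in a photo.
# torchvision's ResNet-50 stands in for the commercial services above;
# "palm_tree.jpg" is a placeholder for any local image.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights)
model.eval()

img = Image.open("palm_tree.jpg").convert("RGB")
batch = weights.transforms()(img).unsqueeze(0)  # resize, crop, normalize

with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)[0]

# Print the model's five best guesses and its confidence in each.
for p, i in zip(*torch.topk(probs, 5)):
    print(f"{weights.meta['categories'][i]}: {p:.2%}")
```

Point the same photo at models trained on different label sets and you’ll get different guesses back, which is exactly the effect the four robots showed.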

Each bot’s distinctive answers, I’m willing to bet, are the result of the data their creators used to teach them about the world.

If you stop and think about that for a minute, it’s really profound. Viégas and Wattenberg’s cheeky experiments (and my own quick attempt to verify them) show that at a very fundamental level computers aren’t all that different from humans. Our individual intelligence and our worldviews depend on our experience. And in computer-speak, “experience” equals data.

As we look into a future where robots live, work and play alongside us, that will have significant consequences. Robots with access to the most and the best data will have a leg up over others with less or mediocre information at their robofingertips. That’s why companies that are building both physical and virtual bots are so protective of their data, and why they’re so intent on getting their paws on more and more of it. They know that the kinds of AI systems they want to create—and sell—live and die on data.

What’s interesting, too, is that the human interactions these AIs have access to will help make them more capable, just as is the case with humans. There’s evidence, for example, that reading to kids early in life helps them develop better reading skills later. Humans have the chance to teach AIs. When MetaMind identified my palm tree as a bee, it asked me to tell it if it had made a mistake. I typed in “palm tree” as a label for the image, and realized none of its previous images had that label. Palm tree didn’t auto-populate in its image classifier. It’d never “seen” one. I taught it something, and hopefully that will make it smarter in the future.
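MetaMind never said how it folds corrections like mine back into its models, but the standard recipe for teaching a trained classifier a new label is transfer learning: keep the features the model has already learned and fit a new output layer that includes the missing class. A rough sketch, with the class count, data, and training step all hypothetical:

```python
# Hedged sketch of "teaching" a classifier a new label via transfer learning.
# The 1,001-class head (old labels plus "palm tree") and the training step
# are illustrative guesses, not MetaMind's actual pipeline.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 1001  # hypothetical: 1,000 old labels plus "palm tree"

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for param in backbone.parameters():
    param.requires_grad = False  # freeze what the model already knows

# Swap in a new final layer sized for the expanded label set.
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One gradient step on user-corrected examples (e.g. my palm tree)."""
    optimizer.zero_grad()
    loss = loss_fn(backbone(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Every correction a user submits becomes one more labeled example, which is why that feedback prompt matters: the model literally gets smarter as people talk back to it.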

That’s a very simple, trivial example, but fast-forward a couple of decades and we’ll be able to teach our roboassistants much more complex tasks. Often when we think about machines, we think of them as cookie-cutter, but they’re not. And the more we interact with them, the less generic they’ll become.

Daniela Hernandez is a senior writer at Fusion. She likes science, robots, pugs, and coffee.
