Getting computers to understand illusions could make them less stupid
As you stare at the circles above, you might notice that they suddenly start rotating. They’re not. This is an optical illusion, a trick played on your brain by the way it processes color gradients. Illusions are about subtlety; change the color gradient, for instance, and the illusion vanishes. Likewise, as people age, the illusion disappears, thanks to as-yet-unknown changes in how we perceive motion and color.
To a computer, the “rotating snakes” look static. Machines don’t really get illusions because the way they see, at least right now, is kind of black and white: either they detect an object or they don’t. In other words, there isn’t much room for perception, the ability to interpret reality. And that makes them easier to fool. Yes, for all the talk of superintelligent robots, getting them to believe our lies is still a piece of cake.
Take dong detection. A few years ago, digital dicks started popping up in LEGO Universe, an online game that was supposed to be kid-friendly. The company tried to build an automated penis detector, but its efforts were unsuccessful because people found ways to trick the machines: they chopped their pixel-perfect phalluses into separate parts. The machines couldn’t crack the visual penis puzzles. To them, there was no penis in sight. But humans knew better.
Likewise, “a terrorist could wear a mask of transparent, plastic film…[to] trick a facial recognition system into seeing an authorized security agent instead of recognizing a known terrorist,” Jeff Clune, an artificial intelligence researcher at the University of Wyoming, told the tech publication Communications of the ACM. “Any time I could get a computer to believe an image is one thing and it’s something else, there are opportunities to exploit that to someone’s own gain.”
That could have significant ramifications for privacy and security, so patching up the blind spots in our AIs is an active area of research. Some scientists say that a better understanding of how our own visual system evolved to perceive the world might unearth some answers. Enter the science of illusion. Here’s a description from McGill University’s The Brain From Top to Bottom website:
To perceive is to create a figure or shape that does not necessarily appear as such in the real world but that we can represent mentally so that we can recognize it under various conditions (for instance, when it is partly hidden). Hence, by studying the way that the brain fills in missing or ambiguous visual information, we can learn a lot about the way that we perceive the world. Optical illusions provide fertile ground for such study, because they involve ambiguous images that force the brain to make decisions that tell us about how we perceive things.
As seen from the LEGO dong detection problem, computers aren’t good at filling in missing information. They also tend to misidentify images if the object they’re trained to recognize is hidden from view, as in the mask example.
One group at MIT is trying to come up with ways to translate what we imagine into code computers can understand. In one nifty experiment, for instance, they took thousands of white-noise images and asked people whether they saw anything in them. Most of the time, people’s response was ¯\_(ツ)_/¯. But every once in a while, they’d say an image looked like a car or a rose. When that happened multiple times, the researchers could set those images apart and, using math, figure out what it was that made them car- or rose-like.
“Although all image patches…are just noise, when we show thousands of them to online workers and ask them to find ones that look like cars, suddenly a car emerges in the average,” Carl Vondrick of the Massachusetts Institute of Technology in Cambridge, who carried out the research, wrote in the paper. All this, they say, gives them clues as to how the human brain creates templates for different objects.
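The averaging trick the researchers describe is known in vision science as reverse correlation. Here’s a minimal, self-contained sketch of the idea, not their actual code: a made-up “template” stands in for whatever observers have in mind, thousands of pure-noise patches play the role of the stimuli, and patches that happen to resemble the template get “labeled” and averaged. All names and numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical internal "template" (a bright central blob standing in
# for "car-like"), on a toy 16x16 image. Zero-mean for clean math.
template = np.zeros((16, 16))
template[6:10, 6:10] = 1.0
template -= template.mean()

# Thousands of pure-noise patches, as in the experiment.
patches = rng.standard_normal((20000, 16, 16))

# Simulate observers: a patch gets "called a car" when it happens to
# correlate with the internal template above a high threshold.
scores = (patches * template).sum(axis=(1, 2))
selected = patches[scores > np.quantile(scores, 0.99)]

# Averaging the selected noise makes the template emerge: the mean of
# the chosen patches correlates strongly with it, even though no
# single patch looks like anything.
avg = selected.mean(axis=0)

def corr(a, b):
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(corr(avg, template))        # strongly positive
print(corr(patches[0], template)) # near zero
```

The point of the toy: none of the inputs contain a signal, yet conditioning on the observers’ yes/no answers and averaging recovers the template they were matching against.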
Then, of course, there’s DeepDream, Google’s hallucinogenic image generator, and EyeScream, the Facebook version. At their core, these programs are ways for researchers to figure out how computers make mistakes. By getting them to generate images, engineers could learn if their algorithms were learning the features that make a dumbbell a dumbbell and an ant an ant. If not, they could tweak their software accordingly.
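The core move behind DeepDream-style generators is gradient ascent on an image: freeze the network’s weights and nudge the pixels so a chosen unit fires harder, revealing what that unit has actually learned to detect. Here’s a deliberately tiny sketch of that loop, using a single made-up linear “feature detector” instead of a deep convnet; everything here is an illustrative stand-in, not Google’s or Facebook’s code.

```python
import numpy as np

rng = np.random.default_rng(1)

# A fixed "feature detector": a diagonal-stripe filter on 8x8 images
# (zero-mean, like a crude edge template). In real DeepDream this
# would be a unit deep inside a trained convnet.
w = np.eye(8)
w -= w.mean()

x = rng.standard_normal((8, 8)) * 0.1  # start from faint noise

def activation(img):
    # A linear unit: a = <w, img>
    return float((w * img).sum())

before = activation(x)
for _ in range(50):
    grad = w        # for a linear unit, d(activation)/d(image) = w
    x += 0.1 * grad # ascend: change the *image*, not the weights
after = activation(x)

print(after > before)  # prints True: the stripe pattern now dominates x
```

With a real network the gradient comes from backpropagation and the resulting images are the familiar dog-slug hallucinations; the debugging value is the same as in the article: if the amplified image of a “dumbbell” always grows an arm attached to it, the model learned arms, not dumbbells.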
The ultimate goal of all this is robots and AIs that can actually help us do the things we need them to do. If we task our robomaids with making us coffee, we don’t want them to get confused by a partially hidden coffee mug or French press. They need to be able to fill in the details they can’t readily see. Right now, AIs and robots can’t do that.
Daniela Hernandez is a senior writer at Fusion. She likes science, robots, pugs, and coffee.