Robots may be getting less clumsy, thanks to the science of illusion

Robots are clumsy. They fall over, bump into walls, and generally lack the spatial awareness of humans. Earlier this year, I took a ride in Mercedes-Benz’s concept self-driving car, and even that could only drive itself in a loop because engineers had preprogrammed it to do so – the car didn’t really know where it was going, just that it had to turn a certain number of degrees once certain sensors were activated.

If robots are ever going to live comfortably and safely among humans, they’ll have to figure out how to move like us. As with other fields of artificial intelligence, engineers are looking to the human brain to find some answers to this problem. And some recent findings, published in the journal Current Biology, point to one possible way to make robot movement more humanlike.

The experiment, conducted by a team of researchers led by Arvid Guterstam, a 28-year-old medical doctor-turned-scientist based at the Karolinska Institute in Stockholm, Sweden, used an MRI machine and a dummy to test how the human brain responds to changes in its spatial perception.

In January 2013, Guterstam lay down inside a functional MRI machine with a pair of custom-made virtual reality goggles strapped to his face. Usually, these headsets and MRI machines don’t mix: the magnet inside would yank them from a person’s face. But these goggles were wrapped in special material, so the scientists could see what Guterstam’s brain was doing while they played tricks on his senses. In the right corner of the room, a dummy was also lying down.

The researchers turned on the goggles. Through the VR headset, Guterstam could see the room through the dummy’s “eyes.” (He could also see his own body in the background.) Then, the 2.5-hour experiment began. During some trials, one tech would stroke his belly with a stick, and moments later another would do the same to the dummy. During others, both caresses happened simultaneously. And that’s when things got interesting because, Guterstam says, he felt like he was experiencing the entire event from outside the MRI machine — that is, from where the dummy was lying. All the while, the MRI was recording how Guterstam’s brain was processing what was happening.

“It’s an eerie sensation,” he recalls. “You feel touches on another person’s body that you intellectually know is not your own.”

This was a test run for a set of experiments Guterstam and his colleagues would do over the next two years looking at how the brain creates self-perception — that feeling of owning our body and being physically located somewhere in space. After Guterstam’s trial run, the researchers had 15 healthy people come into the Karolinska lab to go through a set of similar tests. (The data from Guterstam’s trial run wasn’t included in the final paper.)

During the tests, researchers placed a dummy in one of three positions, as shown in the graphic below. When they simultaneously stroked the dummy and the person inside the scanner, the participant said they perceived themselves to be where the dummy was, much like Guterstam had. The illusion, says Guterstam, gave them the sense they’d “teleported” out of the scanner. When the researchers “threatened” the dummy with a knife, the person reacted as if they’d been threatened themselves: their skin conductance response increased, a physiological sign of stress.

All the while, they were in a functional MRI machine so the researchers could record, in real-time, how the people’s brains were reacting to these illusions.

“[Spatial awareness] is a very fundamental feeling which we usually take for granted,” he told me. “But it’s a very complicated task to continuously compute the location of our limbs and body in relation to the external environment.” It’s also vital. Without it, surviving would be difficult: we wouldn’t be able to get from one place to another, forage for food, or find our keys. We’d constantly get into accidents.

There’s already a growing body of scientific literature on self-perception. Guterstam’s team had also observed some of these phenomena before in a previous set of experiments. And last week, they published another VR study that tricked people into thinking they were invisible. But without the fMRI data, they couldn’t draw any conclusions about which brain areas might be integrating seemingly conflicting visual and tactile stimuli.

When they tricked participants into thinking the dummy’s body was their own, they found increased activity in the bilateral lateral occipital cortex, a brain region involved in creating a sense of body ownership as well as the visual processing of body parts. They also found that the hippocampus and other parts of the brain involved in spatial understanding were involved in decoding a person’s perceived location.

For instance, in one experiment, the researchers compared whether the person thought they were located in the left or right corner of the room and looked at how much these location-honing regions were activated. When the person “felt” like they were outside the scanner, there was more activity in these areas than when they felt like they were inside the MRI machine. What’s more, they found that the posterior cingulate cortex was responsible for binding together the sense of body ownership and perceived location.

“It is quite interesting from a neurological perspective to diagnose specifically which parts of the brain combine to form a model of the self. They have accomplished that quite well,” Paul Rauwolf, a graduate student at the University of Bath who studies self-deception, said in an email interview. “They have shed light on which areas and perceptions in the brain combine to form an understanding of the self.”

That being said, he added, “it’s rather unsurprising that projecting one’s vision to another location creates errant illusions of the self.” After all, motion sickness, he explained, is a common example of what may be going on in this study. Playing video games and sailing sometimes give us motion sickness because they mess with how our body integrates what we see with how we perceive our body in our environment. When those things don’t align, “weird things happen. Our perceptual systems evolved under certain assumptions…when you break them (e.g. project your vision to a different corner of the room), fidelity in the model of the self is compromised.”

Guterstam and his team haven’t yet started any formal collaborations with robotics labs, but he says it’s something they plan to do in the future. Having a better understanding of how the brain computes the self could help engineers develop robots with a better sense of location and improved navigation skills. That will become increasingly important for self-driving cars and for autonomous home and industrial robots.

Having a better grasp of how patterns of brain activity give rise to self-perception, Guterstam says, could ultimately lead to robots with beefed-up navigation, better neuroprosthetics, and treatments for neurological disorders that mess with our sense of self. But he’s also quick to say that, for now, this is all basic research and that commercial and medical applications are still a long way off.

Still, it’s an exciting development, not least because it could end up helping humans as well as robots. “This study offers significant insight into the variables and assumptions (e.g. we always look in front of our body) that evolution used in generating the experience of self-location,” Rauwolf said. “It offers an excellent springboard for robot design…Not only do I think this study has the capacity to improve robot navigation and self-location, in turn, robotic experimentation will help us further improve human navigation and self-location.”

Daniela Hernandez is a senior writer at Fusion. She likes science, robots, pugs, and coffee.
