How computers will think


Google and Facebook have an almost perfect log of your comings and goings, and they can combine that information with artificial intelligence to predict things about you. The systems are big and sophisticated, but their capabilities are still a far cry from the friendly, charismatic Samantha in the movie Her or the devilish, horrifying HAL in 2001: A Space Odyssey. (That’s probably a good thing. After all, no one wants a HAL taking us out.)

But algorithms are slowly getting smarter. Computer scientists are programming systems that can teach themselves to play Atari games and poker, all on their own. Just last month, some researchers from Germany claimed they’d developed a somewhat sentient Mario, which was capable of responding to voice commands, learning from his environment, and playing his own Nintendo game without human input.

The more autonomous these strange little agents get, the more pressing this question becomes: What will it mean for a computer to be conscious?

“That’s the question that in our generation… we’ll have to answer, as we’re surrounded by ‘creatures’ that rival us in intelligence,” said Christof Koch, the chief scientific officer at the Allen Institute for Brain Science, during a December lecture at the 2014 Neural Information Processing Systems Foundation conference in Montreal.

Even if you’re not a scientist or coder, the future of AI is going to matter to you.

In the near future, everything we do – from falling in love to shopping for groceries to traveling across town – will be done with the aid of machine learning and semi-intelligent, semi-autonomous computer systems. So that you’ll be prepared when the robot revolution arrives at your door, we’ve compiled a working primer about how computers will think, whether we’ll be able to tell they’re doing it, and what it all means for the future of robot-human relationships.

Will we know a conscious machine when we meet one?

When you’re having a conversation with a friend, you infer that she is conscious based on her behavior—and how you’ve behaved in similar situations. If a friend tells you she’s feeling down, you can infer what’s happening inside her head because you’ve felt sadness or frustration yourself. You can read her body language, remember your own troubles, and empathize. You project consciousness onto her because you’re conscious yourself. We do this with animals, too. If a dog is smiling, we conclude it must be happy.

But these predictions start to break down with brains that are less similar to ours — say, that of a worm. According to Koch, that’s because worms have different and much simpler combinations of neurons, which he calls the “atoms” of behavior, perception, and consciousness in biological systems.

For biologists studying living things, the big question is how many neurons — or interactions between neurons — it takes to build consciousness that approximates our own. We know it’s a huge number, but we don’t know the precise answer.

With computers, though, the problem gets a bit trickier, because the “biology” of a computer is completely different from that of a human. Computers are composed of silicon-based transistors, tiny switches that turn on and off to store information. Will that difference mean that consciousness in a computer will look and feel drastically different? Will we be able to recognize it? What’s the best way to test for consciousness in a digital system?

To get at some of these issues, Giulio Tononi at the University of Wisconsin-Madison has been devising a mathematical theory of consciousness, dubbed the “Integrated Information Theory,” that can apply to any complex system — including the human brain and the Internet. Get enough information connected together in certain mathematically specific ways and consciousness arises.

The jury is still out on whether this theory fully explains how consciousness arises.
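
To get a rough feel for what “integrated” means here, the toy Python sketch below measures how much information two halves of a tiny binary system share. It is only a loose illustration of the flavor of the idea, not Tononi’s actual Φ calculation, which considers every possible way of partitioning a system and its cause-and-effect structure; the example systems and numbers are invented for this sketch.

```python
# A toy illustration of "integration," not Tononi's actual phi computation.
# Real IIT examines every partition of a system and its cause-effect
# structure; here we just compute the mutual information shared by the two
# halves of a tiny binary system.
from collections import Counter
from math import log2

def mutual_information(samples):
    """samples: list of (left_half_state, right_half_state) observations."""
    n = len(samples)
    joint = Counter(samples)
    left = Counter(l for l, _ in samples)
    right = Counter(r for _, r in samples)
    return sum(
        (c / n) * log2((c / n) / ((left[l] / n) * (right[r] / n)))
        for (l, r), c in joint.items()
    )

# Two halves that always agree share a full bit of information...
coupled = [(0, 0)] * 50 + [(1, 1)] * 50
# ...while halves that vary independently share none.
independent = [(l, r) for l in (0, 1) for r in (0, 1)] * 25

print(mutual_information(coupled))      # 1.0 bit
print(mutual_information(independent))  # 0.0 bits
```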

What rights and responsibilities will conscious machines have?

One theory is that consciousness is defined by a thing’s relationship to other things. “Consciousness is social…it’s about being in a social group,” says Phil Maguire, a computer scientist at the National University of Ireland in Maynooth, who recently published a study on machine consciousness.

This definition of consciousness would seemingly exclude one-off systems like a lone computer, which isn’t part of any social group. But as machines become interconnected, they could begin to take on some of the characteristics of conscious beings. Take Google’s robotic cars or drones. For these and other autonomous robots to really work, they’ll have to navigate roads and maneuver around people and each other.

That interplay could signal the earliest beginnings of consciousness. When a system “has to predict the behavior of the things around it…that’s where you get consciousness coming in,” says Maguire. “Maybe in that case, we will treat artificial, complex systems as being conscious.”

Recent developments in AI and robotics do suggest that computers are getting better at predicting actions. So, as the line between human and computer-generated “consciousness” becomes blurred, we’ll face a whole slew of ethical and legal questions about the essence of personhood: who’s to blame if a computer makes a mistake, like a self-driving car getting into an accident, or does something illegal, as in the case of the Random Darknet Shopper, a bot that bought drugs on the internet on its own?

“How do you handle liability? Who do we hold responsible?” asks University of Washington law professor Ryan Calo, who specializes in cyber law and robotics. “We need to use the law to create the proper incentives.”

Perhaps, Maguire posits, there’ll come a time when the computers themselves — and not their manufacturers — will be liable for their misdeeds, and subject to some kind of penalty system.

Will thinking computers emulate our brains?

Another theory is that consciousness is defined by how we link pieces of information together. Our brains are a complex network of neurons working together to store information about movement, emotion, the environment, the things we read and the interactions we have with people. Our memories aren’t stored in any one place. Instead, they seem to be distributed across the brain’s neural networks. When you recall foods you ate as a kid, for instance, several regions of the brain light up with activity. Out of all those neuronal bursts, consciousness somehow arises. How exactly isn’t clear.

If we want computers to be able to replicate our consciousness, they’ll need to have the same — or similar — types of links.

In conscious systems, information “becomes part of your essence. You’re not just storing your memory separate from everything else that you know,” says Maguire. “You’re completely binding it together, and it’s this binding [that] we mean when we use the word consciousness.”

By that definition, today’s computers are only just beginning to approach something that looks like consciousness.

Most systems today are only really good at one thing, say vision or speech. Engineers are starting to bind these capabilities together into single systems, most of which exist only in research settings. At the Microsoft Research campus, for instance, robots and virtual assistants help visitors find their way and help staffers keep track of appointments. These systems seem to understand both written and spoken language pretty well, and they can recognize people. They can predict the schedules and whereabouts of their bosses.

But if you asked them if they understood what they were doing on a philosophical level, would they know? Could they hold a conversation about the symbolism in Gabriel García Márquez’s One Hundred Years of Solitude and how you relate to it on an emotional level? Probably not. That is to say, their intelligence is still rudimentary. Would you call that consciousness? I wouldn’t.

Plus, compared to a human brain, these things are still wildly inefficient. Humans learn quickly from a few examples, but computers need to ingest massive amounts of data just to master one type of task. Engineers at companies like Google and Facebook have figured out how to hook up thousands of computers into massively parallel networks that can process information more speedily. This has helped them create systems that can do seemingly intelligent tasks, like classify images and respond to our voice commands. But speed only gets you so far. Your brain is slower, but it somehow manages to do incredibly complex operations.

So another approach to building better intelligence is to develop more brain-like computers that encode information more the way brains do. Earlier this year, for example, IBM unveiled a so-called neuromorphic chip that could store information as a series of pulses, which is just one of the ways neuroscientists believe neurons in the brain encode information.
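
To make “a series of pulses” a bit more concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, a textbook model of spiking in which stronger inputs produce denser trains of pulses. It illustrates the general idea only, not IBM’s chip design, and the threshold and leak values are arbitrary.

```python
# A minimal sketch of pulse-based ("spiking") encoding, assuming a simple
# leaky integrate-and-fire neuron. This is an illustration of the idea,
# not IBM's actual chip architecture; threshold and leak are arbitrary.
def leaky_integrate_and_fire(inputs, threshold=1.0, leak=0.9):
    """Accumulate input current, leak a little each step, and emit a pulse
    (a 1) whenever the membrane potential crosses the threshold."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0   # reset after firing
        else:
            spikes.append(0)
    return spikes

# A stronger input produces a denser train of pulses.
print(leaky_integrate_and_fire([0.3] * 20))   # sparse spikes
print(leaky_integrate_and_fire([0.8] * 20))   # frequent spikes
```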

But how these new chips will translate into smarter computers is still up for debate, and Maguire doesn’t think these systems will ever be fully conscious either. No matter how speedy or brain-like they get, they’ll still be slaves to the same — or similar — rules of computing as current systems.

Plus, computer chips will never have one identifying trait of human consciousness: mystery. Humans will know how “to disintegrate it, how to break it up into its components…There’s no mystery about it, whereas there’s a mystery about other people’s behavior,” Maguire said.

Is the Singularity of smarter-than-human machines near?

Tononi, the University of Wisconsin-Madison professor who created the Integrated Information Theory, believes that there’s another crucial part of consciousness: a conscious system must have a feedback loop that allows it to learn from experience.

The AI-powered Mario is just one example. After learning a few rules, it could navigate its environment, understand language and associate certain strings of actions with results. (Stomping a Goomba, for instance, results in the shroom’s demise.)
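
That try-something, see-what-happens, adjust loop is the heart of reinforcement learning. The sketch below is a minimal tabular Q-learning agent on a made-up five-position corridor; it is not how the Mario project was built, just an illustration of how rewards gradually shape which actions a system prefers.

```python
# A minimal sketch of feedback-loop learning (tabular Q-learning) on a
# made-up five-position corridor. This is not the Mario project's method;
# the environment, reward, and parameters are invented for illustration.
import random
from collections import defaultdict

ACTIONS = ["left", "right"]
GOAL = 4  # walking right from position 0 to 4 earns the only reward

def step(position, action):
    """Move one square, clamped to the corridor; reward 1 at the goal."""
    position = max(0, min(GOAL, position + (1 if action == "right" else -1)))
    return position, (1.0 if position == GOAL else 0.0), position == GOAL

q = defaultdict(float)                  # (position, action) -> estimated value
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration

for episode in range(200):
    position, done = 0, False
    while not done:
        values = [q[(position, a)] for a in ACTIONS]
        if random.random() < epsilon or values[0] == values[1]:
            action = random.choice(ACTIONS)               # explore
        else:
            action = ACTIONS[values.index(max(values))]   # exploit
        new_position, reward, done = step(position, action)
        # The feedback loop: nudge this action's value toward what was observed.
        best_next = max(q[(new_position, a)] for a in ACTIONS)
        q[(position, action)] += alpha * (reward + gamma * best_next
                                          - q[(position, action)])
        position = new_position

# After training, "right" should be the preferred action at every position.
print({pos: max(ACTIONS, key=lambda a: q[(pos, a)]) for pos in range(GOAL)})
```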

Then there are neural networks, one of several types of AI algorithms that can learn from experience. Neural networks are layered software constructs that are loosely modeled after the columns of neurons in the cortex, the region of the brain that handles tasks like speech and vision. They’re powerful because they can make inferences about the world after crunching through massive amounts of data. The more data they see, the better they get. Remember the famous Google computer that taught itself how to identify cats on YouTube? That was a neural network.
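
Here is a stripped-down sketch of the idea: a tiny two-layer network trained on an invented toy task (deciding whether two numbers sum to more than 1). It is nowhere near the scale of Google’s cat detector, but the ingredients, layers of weighted connections adjusted from examples, are the same.

```python
# A minimal sketch of a layered neural network: two layers, plain gradient
# descent, and an invented toy task (is x + y > 1?). Real systems are far
# larger, but the mechanics below are the basic recipe.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 2))                            # 200 examples, 2 inputs each
y = (X.sum(axis=1) > 1.0).astype(float)[:, None]    # label: does the pair sum past 1?

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)      # input  -> hidden layer
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)      # hidden -> output layer
lr = 1.0                                            # learning rate (arbitrary)

for epoch in range(2000):
    hidden = sigmoid(X @ W1 + b1)                   # forward pass
    output = sigmoid(hidden @ W2 + b2)
    grad_out = (output - y) / len(X)                # backward pass (cross-entropy)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * (hidden.T @ grad_out)
    b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * (X.T @ grad_hidden)
    b1 -= lr * grad_hidden.sum(axis=0)

accuracy = ((output > 0.5) == (y > 0.5)).mean()
print(f"training accuracy: {accuracy:.2f}")         # should approach 1.0
```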

But, as it stands, you wouldn’t call these systems conscious. For starters, they’re not yet very versatile. A neural network trained on images won’t automatically be great at interpreting sounds, and vice versa.

To begin to approach even the dumbest human brain, AI systems would need to work simultaneously on many different symbol systems, operate as part of a social group, link information together in a seamless, neuromorphic way, and learn from experience the way humans do. Not even the best systems can do all these things, even at a rudimentary level.

So, no, the Singularity is not near.

Daniela Hernandez is a senior writer at Fusion. She likes science, robots, pugs, and coffee.
