The thinking computer that was supposed to colonize space


It was supposed to read and write, walk and talk, and understand the world. It was supposed to think deep thoughts and learn from its mistakes. Hell, it was even supposed to reproduce.

But it wasn't human. It wasn't even living. Instead, it was one of the first modern attempts at building a thinking, conscious machine. It was called the Perceptron, and it was the brainchild of Frank Rosenblatt, a cognitive-systems scientist at the Cornell Aeronautical Laboratory.

The project was more than an academic curiosity. It was bankrolled by the U.S. Navy. At an event in July 1958, Rosenblatt unveiled the first Perceptron prototype, which ran on a five-ton, room-sized IBM mainframe, and gave the world a glimpse of the future. He showed the audience that his Perceptron could differentiate between left and right after "reading" through about 50 punched cards.


But this was only the beginning. Perceptrons would eventually be able to think like humans. They'd help us to fend off our enemies and to streamline our lives here on Earth. And one day, they "might be fired to the planets as mechanical space explorers," Rosenblatt told the New York Times in 1958.

Of course, that never quite panned out. Rosenblatt's Perceptrons never really made it out of the lab, much less into outer space.

Not long after he first described the Perceptron in a 1958 paper, other prominent computer scientists started mocking him and his grandiose predictions. In 1969, for instance, MIT's Marvin Minsky and Seymour Papert co-authored a book detailing everything that was wrong with perceptrons. The book proved too powerful a takedown to overcome, and perceptrons, along with their underlying algorithms, were largely forgotten for decades to come.


Granted, some of the criticism was well deserved. In the '50s, perceptrons couldn't really do much. Computers back then weren't nearly sophisticated enough to do the talking, seeing and exploring Rosenblatt spoke of. Even if they had been more powerful, scientists simply didn't have the data they needed to teach computers about the world we live in.

Rosenblatt's Perceptron is now but a relic stashed away at the Smithsonian. But the inspiration behind it is very much alive today. Rosenblatt thought that the best way to build a computer with human-level intelligence was to look to the brain. Today's most influential artificial-intelligence researchers are taking the same approach.


"As far as I know we haven't found AI yet, so any inspiration we're getting from biology is worth taking in and…see[ing] if there's some computational or mathematical principles that we can use," said Yoshua Bengio, an AI researcher at the University of Montreal, during a December lecture at the 2014 Neural Information Processing Systems Foundation conference in Montreal.

Right now, the dominant algorithms used by Bengio and other AI experts at Google, Facebook, Microsoft and Baidu are called artificial neural networks. Basically, they're a much more evolved version of Rosenblatt's embryonic "thinking machine." Neural networks are layered software constructs that are loosely modeled after the columns of neurons in the cortex, the region of the brain that handles tasks like speech and vision.


Rosenblatt's Perceptron consisted of three layers. The first, modeled after the retina, consisted of 400 light sensors hooked up to an intermediary layer of 512 triggers. (Each sensor was paired with as many as 40 of these switches.) The second layer integrated the messages the "retina's" faux neurons sent along; if the signals crossed a certain threshold, it fired off a message to the third layer, which made sense of all the incoming information and computed an answer to the task at hand, say identifying a letter, or which side of a card had a punched hole, left or right. If the computer didn't get the right answer, Rosenblatt could go back to the triggers and tweak their individual thresholds until it did. Modern neural nets are more complicated and advanced, but this is still pretty much how they work.
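The adjust-until-correct procedure described above is what's now called the perceptron learning rule. Here's a minimal sketch in modern terms: instead of hand-tuned trigger thresholds, it uses numeric weights that get nudged whenever the machine answers wrong. The tiny four-sensor "retina" and punched-card labels are invented for illustration, not taken from Rosenblatt's actual hardware.

```python
def train_perceptron(samples, epochs=20, lr=1.0):
    """Train a single-layer perceptron.

    samples: list of (inputs, label) pairs, where inputs is a list of
    0/1 "sensor" readings and label is 0 or 1.
    """
    n = len(samples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for x, label in samples:
            activation = sum(w * xi for w, xi in zip(weights, x)) + bias
            prediction = 1 if activation > 0 else 0
            error = label - prediction  # -1, 0, or +1
            # The learning rule: when wrong, nudge each weight
            # toward the correct answer, just as Rosenblatt tweaked
            # his triggers' thresholds.
            for i in range(n):
                weights[i] += lr * error * x[i]
            bias += lr * error
    return weights, bias


def predict(weights, bias, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0


# Toy version of the punched-card demo: a 4-sensor "retina" where the
# hole is on the left (label 0) or the right (label 1).
cards = [([1, 0, 0, 0], 0), ([0, 1, 0, 0], 0),
         ([0, 0, 1, 0], 1), ([0, 0, 0, 1], 1)]
w, b = train_perceptron(cards)
```

After a few passes over the cards, the weights settle so that `predict` answers every card correctly, which is all the original demo had to do.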

Rosenblatt and early perceptron believers anticipated the impact these systems would have, though perhaps they were a bit too optimistic about how long it would take. "Later Perceptrons will be able to recognize people and call out their names and instantly translate speech in one language to speech or writing in another," that 1958 Times article reported. At the time, they expected to have things working within a year.


Fast forward almost six decades, and, more or less, all those things have actually come to pass, in large part because of neural networks. Late last year, Microsoft unveiled Skype Translator, a tool that could translate English into Spanish, and vice versa, in real time, enabling conversations between people who don't speak the same language.

Speech-to-text transcription and virtual assistants like Siri sort of work. Facebook and Google can recognize and tag objects and faces in photos we upload. Intel previewed True Key, an app that uses your face, instead of a string of numbers and letters, as your password.


And just yesterday, the robots of tomorrow got some more good news. Google DeepMind — the search giant's secretive in-house AI think tank — published a paper in the journal Nature proclaiming that they'd developed a new type of neural-net powered AI that could play roughly 50 Atari games as well as professional gamers. It's "the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks," the researchers wrote.

That's important because the technology baked into DeepMind's Atari master could be used to teach self-driving cars or factory robots how to navigate their environments more safely and efficiently. (For now, it's unclear, though likely, that Google is test-driving this with its herd of robots.)


"Most factory robots don’t do sensing at all. They carry out rote procedures that have been programmed into them," Stuart Russell, an AI expert at the University of California, Berkeley, told me a few weeks back. With the types of techniques DeepMind is using, "robots are opening their eyes."

Rosenblatt would have been proud. But he'd also warn that it might take longer to get there than we anticipated — an important point to remember before we panic about killer robots chasing after us.


Futures Past is a weekly look at the technologies and science that imagined the future, correctly or not. If you’ve got a tip, email me at Brownie points if you’re from the future.

Daniela Hernandez is a senior writer at Fusion. She likes science, robots, pugs, and coffee.
