Getty Images

On Thursday in Seoul, Google DeepMind's Go-playing artificial intelligence, AlphaGo, beat human Go champion Lee Sedol in a second straight game.


The match was the second of five, which will run through Tuesday (you can watch them on DeepMind's YouTube channel). In the worlds of both Go and AI, Sedol's loss is a big deal, and not simply because of the money at stake (if Sedol wins the best-of-five series, he gets $1 million). AlphaGo already beat European Go champion Fan Hui last year, but Sedol is another cut above, having won 18 international titles.

Go, as The New York Times succinctly described it, is a game in which two players "compete to win more territory by placing black and white 'stones' on a board made up of 19 lines by 19 lines." It's also considered a barrier for machine intelligence because, as the Times explains, "play is more complex than chess, with a far greater possible sequence of moves, and requires superlative instincts and evaluation skills." Sedol himself said part of the oddness of playing against AlphaGo was that, as opposed to playing another human, he wasn't able to read it. He said that it felt like playing alone.


News of AlphaGo's first win touched off a medium-sized tempest of excitement, with DeepMind founder Demis Hassabis describing it as an "historic moment." It also led someone under the username 'fhe' to post this very interesting comment on Hacker News, excerpted below:

When I was learning to play Go as a teenager in China, I followed a fairly standard, classical learning path. First I learned the rules, then progressively I learn[ed] the more abstract theories and tactics. Many of these theories, as I see them now, draw analogies from the physical world, and are used as tools to hide the underlying complexity (chunking), and enable the players to think at a higher level.

For example, we're taught [to] consider connected stones as one unit, and give this one unit attributes like dead, alive, strong, weak, projecting influence in the surrounding areas. In other words, much like a standalone army unit.

These abstractions all made a lot of sense, [felt] natural, and certainly [helped] game play — no player can consider the dozens (sometimes over 100) stones all as individuals and come up with a coherent game play. Chunking is such a natural and useful way of thinking.

But watching AlphaGo, I am not sure that's how it thinks of the game. Maybe it simply doesn't do chunking at all, or maybe it does chunking its own way, not influenced by the physical world as we humans invariably [are]. AlphaGo's moves are sometimes strange, and couldn't be explained by the way humans chunk the game.

It's both exciting and eerie. It's like another intelligent species opening up a new way of looking at the world (at least for this very specific domain). And much to our surprise, it's a new way that's more powerful than ours.

The grain of salt to take with this is that a) the poster is an amateur Go player who studied AI in college and b) it's a comment on Hacker News. Nonetheless, it's an interesting observation, especially in light of the history of human-AI gameplay, and particularly the history of computer chess.

In a 2010 article for the New York Review of Books, chess grandmaster Garry Kasparov — who famously lost to IBM's Deep Blue computer in 1997 — talked about his hope for a return to what made computer chess "so attractive to many of the finest minds of the twentieth century." Specifically, Kasparov was hopeful for research leading to the development of "a program that played chess by thinking like a human, perhaps even by learning the game as a human does," as opposed to one that simply plays better, more efficient chess.


Part of this is a function of computing power. In an interview conducted by the Computer History Museum in 2005, Feng-hsiung Hsu, the team lead behind Deep Blue, characterized the machine as not purely the result of brute force processing power. He said that "at the end, when we actually tried to beat Kasparov, we realized something: that you really need to put the intelligence in [as well]. You need to put the chess knowledge in."

Nonetheless, while AlphaGo does benefit from raw processing power, the idea 'fhe' suggests is more interesting than either a computer playing Go like a computer or a computer playing Go like a human: a computer thinking in human-like patterns, but without the prejudices of human physical experience.


Then again, maybe it's just good at planning ahead. For now, we'll have to see if Sedol can pull out a victory in the next three matches.

Ethan Chiel is a reporter for Fusion, writing mostly about the internet and technology. You can (and should) email him at
