The Games Computers Play

I've been involved for almost all my adult life in the branch of Artificial Intelligence devoted to game playing: computer chess, computer bridge, Scrabble, backgammon, poker, . . . And along the way I've stuck my neck out and made a few public predictions, most of which have turned out to be spot on. The first of these was in 1968 when I made a £500 bet with John McCarthy and Donald Michie, both A.I. biggies, that I wouldn't lose a chess match against a computer within 10 years. At the time I was Scottish Champion and felt very confident wagering more than half a year's salary.

I duly won that bet, and another one (with computer guru Dan McCracken) for a further 5 years. Not long afterwards I published an article entitled "When will brute force programs beat Kasparov?", in which I predicted 1997 as the first year when a chess program would win a match against a reigning world champion. Kasparov obliged me in 1997 by crashing to defeat against IBM's Deep Blue in New York.

My predictions regarding computer Go, the oriental game played on a 19x19 board, have been fewer and not as successful as those in chess. I correctly predicted that no Go program would win a match against a professional player by 1994, and since then I've been saying to anyone who would listen that it would take until 2035 or thereabouts for a program to reach World Champion level in Go. Why so long?

One of the great difficulties in programming Go is the sheer number of possible moves from each position: 361 at the start of a Go game, compared with only 20 in chess. So in Go the "tree" of possible variations, looking any particular number of moves ahead, is vastly larger than the corresponding tree for chess. But there is another big problem facing Go programmers, a problem which does not exist in chess or any other strategy game with which I'm familiar.
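
To put rough numbers on that, here is a small back-of-the-envelope sketch in Python. The branching factors it uses - roughly 250 legal moves per position in Go on average and roughly 35 in chess - are commonly quoted approximations chosen purely for illustration, not exact figures.

```python
# Back-of-the-envelope comparison of look-ahead tree sizes in Go and chess.
# The branching factors are illustrative averages, not exact values.

GO_BRANCHING = 250      # approximate average number of legal moves in Go
CHESS_BRANCHING = 35    # approximate average number of legal moves in chess

def tree_size(branching_factor: int, depth: int) -> int:
    """Number of leaf positions when looking `depth` moves ahead."""
    return branching_factor ** depth

for depth in (2, 4, 6):
    go = tree_size(GO_BRANCHING, depth)
    chess = tree_size(CHESS_BRANCHING, depth)
    print(f"depth {depth}: Go ~{go:.1e} positions, chess ~{chess:.1e}, "
          f"ratio ~{go / chess:,.0f}x")
```

Even at these modest depths the gap runs to several orders of magnitude, which is why the brute-force search that works so well in chess gets nowhere near master strength in Go.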

In Go it often seems impossible for a very strong player to enunciate to others why a particular move is best, or to explain why one position is better than another and by how much. The experts will often say that one side has an advantage because their stones have "better shape" - a somewhat vague concept which strong Go players understand but which they are unable to explain or quantify for the benefit of programmers. And if Go programmers cannot extract that knowledge from the minds of expert players, how could they possibly make rapid progress towards a world-champion-beating program? Hence my prediction of 2035.

Well, I was wrong. I had failed to take into account the fact that Londoner Demis Hassabis, a genius games player, would decide to set his sights on computer Go. Demis rocketed to fame last year when he sold his A.I. company DeepMind to Google for £400 million, and was named by the Evening Standard as the second most important person in London, after Boris Johnson. Demis and his team have developed a Go program called AlphaGo, which late last year won a match against the European Champion, Fan Hui, crushing its opponent by 5 wins to nil. And next month AlphaGo takes on the world's strongest player, Lee Se-dol, in a match in Seoul which can be followed live on YouTube. How have they done it?

Hassabis and his team have achieved their program's astounding success at the Go board by using "deep learning", a method of machine learning in which the "teaching" is accomplished by presenting the software with massive amounts of data and programming the system to generalize from that data - to teach itself how it should "think" about positions on the Go board. What Hassabis's team did in Go was to feed their program more than 300 million positions from master games, together with the moves played by the master players in those positions. By generalizing from those positions and moves, AlphaGo learned enough about the game, and in particular about how to evaluate a Go position, that, combined with a look-ahead technique called Monte Carlo tree search, it was able to crush the European Go Champion.
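
For readers who would like to see what that kind of "teaching" looks like in code, here is a minimal sketch in Python (using PyTorch) of the supervised-learning idea: a small network trained to predict the expert's move from a board position. It is an illustration only, not DeepMind's actual architecture - the three input planes, the tiny network and the random placeholder batch are all assumptions made so the example stays self-contained.

```python
# Minimal sketch of supervised move prediction for 19x19 Go.
# Not DeepMind's architecture: layer sizes, input encoding and the
# random placeholder data are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

BOARD = 19
NUM_POINTS = BOARD * BOARD          # 361 possible board points

class PolicyNet(nn.Module):
    """Tiny convolutional net mapping a board position to move logits."""
    def __init__(self, planes: int = 3):
        super().__init__()
        self.conv1 = nn.Conv2d(planes, 32, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(32, 32, kernel_size=3, padding=1)
        self.head = nn.Linear(32 * NUM_POINTS, NUM_POINTS)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        return self.head(x.flatten(1))      # one logit per board point

net = PolicyNet()
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

# Placeholder batch: in the real system these would be positions from
# master games paired with the move the master actually played.
positions = torch.randn(64, 3, BOARD, BOARD)
expert_moves = torch.randint(0, NUM_POINTS, (64,))

for step in range(3):                        # a few illustrative steps
    logits = net(positions)
    loss = F.cross_entropy(logits, expert_moves)   # imitate the expert
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss {loss.item():.3f}")
```

Broadly speaking, a network trained along these lines is then used to suggest a handful of promising moves for the Monte Carlo tree search to explore, so that the search need not consider all 361 alternatives at every turn.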

So I was wrong, but at least this time I didn't bet on my prediction.

The next big thing? Take a look here - you might be surprised...

http://igg.me/at/vegaplus/x/9227470

or here

https://www.youtube.com/watch?v=cqXhtTYj7Oc
