Google computer beats human champion in ancient Chinese game

London: For the first time, a Google computer programme has beaten a human champion of the 2,500-year-old complex Chinese game of Go, in an event seen as a milestone for artificial intelligence.

Google DeepMind’s AlphaGo beat the reigning three-time European Go champion Fan Hui – who has devoted his life to the game since the age of 12 – five games to nil in October last year in London. The first game mastered by a computer was noughts and crosses (also known as tic-tac-toe) in 1952. In 1997, IBM’s Deep Blue computer famously beat Garry Kasparov at chess. Until now, however, Go had thwarted artificial intelligence (AI) researchers; computers had managed to play Go only as well as amateurs.

The game involves players taking turns to place black or white stones on a board, trying to capture the opponent’s stones or surround empty space to make points of territory. However, as simple as the rules are, Go is a game of profound complexity. Invented in China over 2,500 years ago, Go is played by more than 40 million people worldwide.
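To make the capture mechanic concrete, here is a minimal sketch in Python; the board encoding and helper names are illustrative inventions, and the ko, suicide and scoring rules are deliberately omitted.

```python
# Simplified Go capture check (illustrative sketch, not a full rules engine):
# a group of stones is captured when it has no liberties, i.e. no empty
# points adjacent to any stone in the group.
SIZE = 19

def neighbours(x, y):
    """Orthogonally adjacent points that lie on the board."""
    return [(nx, ny) for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1))
            if 0 <= nx < SIZE and 0 <= ny < SIZE]

def group_and_liberties(board, x, y):
    """Flood-fill the group containing (x, y); return its stones and liberties."""
    colour = board[(x, y)]
    stack, group, liberties = [(x, y)], set(), set()
    while stack:
        point = stack.pop()
        if point in group:
            continue
        group.add(point)
        for n in neighbours(*point):
            if n not in board:
                liberties.add(n)      # empty neighbour: a liberty
            elif board[n] == colour:
                stack.append(n)       # same colour: part of the group
    return group, liberties

# A lone black stone surrounded on all four sides by white has no liberties,
# so under the capture rule it would be removed from the board.
board = {(3, 3): "black", (2, 3): "white", (4, 3): "white",
         (3, 2): "white", (3, 4): "white"}
group, libs = group_and_liberties(board, 3, 3)
print(len(group), len(libs))  # 1 0
```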

The number of possible positions in the game is greater than the number of atoms in the universe. This complexity makes Go hard for computers to play, and therefore offers a challenge to AI researchers, who use games as a testing ground to invent smart, flexible algorithms that can tackle problems, sometimes in ways similar to humans.
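As a back-of-the-envelope check on that comparison (the figures below are common estimates, not taken from the article): treating each of the 361 points on a 19-by-19 board as empty, black or white gives an upper bound of 3^361 configurations, against roughly 10^80 atoms in the observable universe.

```python
# Rough scale comparison; 3**361 is a crude upper bound (it also counts many
# illegal positions), and 10**80 is a standard order-of-magnitude estimate.
import math

positions_upper_bound = 3 ** 361  # empty/black/white for each of 361 points
atoms_in_universe = 10 ** 80

print(f"board configurations: about 10^{math.log10(positions_upper_bound):.0f}")
print(f"atoms in the universe: about 10^{math.log10(atoms_in_universe):.0f}")
# board configurations: about 10^172
# atoms in the universe: about 10^80
```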

“Traditional AI methods – which construct a search tree over all possible positions – do not have a chance in Go. So when we set out to crack Go, we took a different approach. We built a system, AlphaGo, that combines an advanced tree search with deep neural networks,” Demis Hassabis, the chief executive of Google DeepMind, wrote in a blog post. These neural networks take a description of the Go board as an input and process it through 12 different network layers containing millions of neuron-like connections.
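As an illustration of a network that takes a description of the board as input, here is a minimal sketch using PyTorch; the 12-layer depth echoes the figure quoted above, but the input planes, channel widths and everything else are illustrative assumptions, not AlphaGo’s published architecture.

```python
# Illustrative 12-layer convolutional "policy network" for a 19x19 board;
# an assumed design for exposition, not AlphaGo's actual architecture.
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    def __init__(self, channels: int = 64, layers: int = 12):
        super().__init__()
        blocks = [nn.Conv2d(3, channels, kernel_size=3, padding=1), nn.ReLU()]
        for _ in range(layers - 2):
            blocks += [nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                       nn.ReLU()]
        blocks.append(nn.Conv2d(channels, 1, kernel_size=1))  # one logit per point
        self.net = nn.Sequential(*blocks)

    def forward(self, board: torch.Tensor) -> torch.Tensor:
        # board: (batch, 3, 19, 19) planes for black stones, white stones, empties
        logits = self.net(board).flatten(1)   # (batch, 361)
        return torch.softmax(logits, dim=1)   # a probability for every move

net = PolicyNet()
empty_board = torch.zeros(1, 3, 19, 19)  # hypothetical empty-board encoding
print(net(empty_board).shape)            # torch.Size([1, 361])
```

In this spirit, the network’s output – a probability over moves – is what a tree search can use to focus on promising branches rather than exploring every possible position.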

“We trained the neural networks on 30 million moves from games played by human experts, until it could predict the human move 57 per cent of the time (the previous record before AlphaGo was 44 per cent),” Hassabis said. “But our goal is to beat the best human players, not just mimic them,” he said. To do this, AlphaGo learned to discover new strategies for itself, by playing thousands of games between its neural networks, and adjusting the connections using a trial-and-error process known as reinforcement learning.
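The article does not give that procedure in code, but a minimal sketch of one self-play update, written in a standard policy-gradient (REINFORCE) style, might look as follows; the play_game helper, the reward scheme and all settings are hypothetical stand-ins, not DeepMind’s actual training procedure.

```python
# Sketch of trial-and-error learning from self-play: moves that led to a
# win are made more probable, moves that led to a loss less probable.
import torch

def self_play_update(policy, play_game, optimiser):
    """Apply one policy-gradient update from a single self-play game."""
    # play_game is a hypothetical helper: it plays the policy against a copy
    # of itself and returns ((board, move) pairs, result).
    moves, result = play_game(policy)  # result: +1.0 for a win, -1.0 for a loss
    loss = torch.zeros(())
    for board, move in moves:          # board states seen, move indices chosen
        probs = policy(board)          # (1, 361) distribution over board points
        loss = loss - result * torch.log(probs[0, move])
    optimiser.zero_grad()
    loss.backward()  # gradients nudge the network's connections
    optimiser.step()
```

Repeating such updates over thousands of games is the “trial-and-error process” the article describes: the connection weights drift toward whatever move choices win games.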

After all that training it was time to put AlphaGo to the test. “First, we held a tournament between AlphaGo and the other top programmes at the forefront of computer Go,” said David Silver of Google DeepMind, the lead author of the study. AlphaGo won all but one of its 500 games against these programmes. The researchers then invited Hui for a closed-doors match, where AlphaGo won by five games to nil. In March this year, AlphaGo will face a five-game challenge match in Seoul against the legendary Lee Sedol – the top Go player in the world over the past decade. The research was published in the journal Nature.

PTI