Google’s computer programme AlphaGo beat the world’s top-ranked player in the ancient Chinese board game Go on Tuesday, reaffirming the arrival of what its developers tout as a ground-breaking new form of artificial intelligence.
AlphaGo took the first encounter in a three-game series against brash 19-year-old Chinese world number one Ke Jie, who before the game called the programme a “cold machine” and vowed never to play it again after this week.
AlphaGo, which was developed by London-based AI company DeepMind Technologies, stunned the Go community a year ago when it trounced South Korean grandmaster Lee Se-Dol four games to one — the first time a computer programme beat a top player in a full contest.
The win over Lee was hailed as a technology landmark, fuelling visions of a brave new world of AI that can not only drive cars and operate “smart homes”, but potentially help mankind figure out some of the most complex scientific, technical, and medical problems.
Last year’s win set the stage for a much-anticipated contest with Ke in the eastern Chinese city of Wuzhen.
Ke, who has been the top-ranked player in the 3,000-year-old game for more than two years and has previously described himself as “pretentious”, boldly declared last year: “Bring it on!”, vowing never to lose to a machine.
Computer programmes have previously beaten humans in cerebral contests, starting with IBM’s Deep Blue defeating chess grandmaster Garry Kasparov in 1997.
But AlphaGo’s success is considered the most significant for AI yet due to the complexity of Go, which has an all but incalculable number of move options, putting a premium on human-like “intuition”, creative thinking and the ability to learn.
Ke has alternated between awe and disdain for AlphaGo.
On Monday night, Ke said on China’s Twitter-like Weibo platform that “the advancement of AI has far exceeded our imagination” but added he would never play it again after this week.
“It will always be a cold machine. Compared to humans, I cannot feel its passion and longing for the game of Go,” he said.
Go involves two players alternately laying black and white stones on a grid, seeking to seal off the most territory.
Proponents had considered Go a bastion in which human ingenuity would remain superior, at least for the foreseeable future.
AlphaGo uses two “deep neural networks”, each containing millions of connections similar to neurons in the brain: a “policy network” that proposes promising moves and a “value network” that judges board positions.
It is partly self-taught, having played millions of games against itself after initial training on records of games between human players.
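That learn-by-playing-itself idea can be illustrated in miniature. The sketch below is purely illustrative and is not DeepMind’s method: a tabular learner improves at a trivial game (5-stone Nim, where players take one or two stones and whoever takes the last stone wins) solely by playing against itself and nudging its win-chance estimates toward observed outcomes. All names and parameters here are invented for the example.

```python
import random

def legal_moves(stones):
    """Moves allowed in this toy Nim: take 1 or 2 stones."""
    return [m for m in (1, 2) if m <= stones]

def play_game(values, epsilon=0.2):
    """Play one self-play game of 5-stone Nim (taking the last stone wins).

    Both sides share the same value table and explore random moves with
    probability epsilon.  Returns the visited (stones, player) states and
    the winning player (0 or 1).
    """
    stones, player, history = 5, 0, []
    while stones > 0:
        moves = legal_moves(stones)
        if random.random() < epsilon:
            move = random.choice(moves)                       # explore
        else:
            # exploit: hand the opponent the worst-valued position
            move = min(moves, key=lambda m: values.get(stones - m, 0.5))
        history.append((stones, player))
        stones -= move
        player ^= 1
    return history, player ^ 1                                # last mover won

def train(games=5000, alpha=0.1, seed=0):
    """Estimate each state's win chance for the player to move, from self-play."""
    random.seed(seed)
    values = {}                                               # unseen states default to 0.5
    for _ in range(games):
        history, winner = play_game(values)
        for stones, player in history:
            target = 1.0 if player == winner else 0.0
            old = values.get(stones, 0.5)
            values[stones] = old + alpha * (target - old)     # nudge toward outcome
    return values

if __name__ == "__main__":
    v = train()
    # Optimal play from 5 stones is to take 2, leaving the opponent at 3.
    best = min((1, 2), key=lambda m: v.get(5 - m, 0.5))
    print(best, round(v[5], 2), round(v[3], 2))
```

After a few thousand self-play games the table reflects the game’s true structure: positions with a multiple of three stones are rated as losing for the player to move, so the learner takes two stones from the opening position of five. AlphaGo applies the same principle at vastly greater scale, with neural networks standing in for the lookup table.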