Game over! Computer wins series against Go champion
The programme, AlphaGo, took a little over four hours to secure its third consecutive win over Lee Se-Dol.
SEOUL:
A Google-developed computer programme won its best-of-five match-up with a South Korean Go grandmaster on Saturday, taking an unassailable 3-0 lead to score a major victory for a new style of "intuitive" artificial intelligence (AI).
The programme, AlphaGo, took a little over four hours to secure its third consecutive win over Lee Se-Dol -- one of the ancient game's greatest modern players with 18 international titles to his name.
Lee, who has topped the world ranking for much of the past decade and had predicted an easy victory when accepting the AlphaGo challenge, now finds himself fighting to avoid a whitewash in the two remaining dead rubbers on Sunday and Tuesday.
"I don't know what to say, but I think I have to express my apologies first," a crestfallen Lee told a post-game press conference.
"I apologise for being unable to satisfy a lot of people's expectations. I kind of felt powerless," Lee said, acknowledging that he had "misjudged" the computer programme's abilities.
"Yes, I do have extensive experience in playing the game of Go, but there was never a case where I was under this much pressure.... and I was incapable of overcoming it," he added.
For AlphaGo's creators, Google DeepMind, the victory went far beyond the $1 million prize money: it proved that AI has far more to offer than superhuman number-crunching.
'Stunned and speechless'
"To be honest, we are a bit stunned and speechless," said a smiling DeepMind CEO Demis Hassabis, who stressed that Lee's defeat in Seoul should not be seen as a loss for humanity.
"Because the methods we have used to build AlphaGo are general purpose, our hope is that in the long-run we will be able to use these techniques for many other problems," Hassabis said.
Applications might range from making phones smarter to "helping scientists solve some of the world's biggest challenges in health care and other areas," he added.
The most famous AI victory to date came in 1997, when the IBM-developed supercomputer Deep Blue beat Garry Kasparov, then the reigning world chess champion, on its second attempt.
But a true mastery of Go, which has more possible move configurations than there are atoms in the universe, had long been considered the exclusive province of humans -- until now.
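A rough back-of-envelope comparison illustrates the scale of that claim. The sketch below uses two commonly cited approximations -- three possible states (black, white or empty) for each of the 361 intersections, and roughly 10^80 atoms in the observable universe -- and counts naive board colourings rather than strictly legal positions.

```python
# Illustrative comparison of the Go board's raw configuration count with
# the commonly cited estimate of ~10^80 atoms in the observable universe.
# 3**361 counts naive colourings (black, white, empty) of every point and
# so overstates the number of legal positions, but conveys the magnitude.

from math import log10

INTERSECTIONS = 19 * 19                     # 361 points on a standard board
naive_configurations = 3 ** INTERSECTIONS   # black / white / empty per point

print(f"Naive board configurations: ~10^{log10(naive_configurations):.0f}")
print("Estimated atoms in the observable universe: ~10^80")
```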
'Human-like' approach
AlphaGo's creators had described Go as the "Mt Everest" of AI, citing the complexity of the game, which requires a degree of creativity and intuition to prevail over an opponent.
AlphaGo first came to prominence with a 5-0 drubbing of European champion Fan Hui last October, but it had been expected to struggle against 33-year-old Lee.
Creating "general" or multi-purpose, rather than "narrow", task-specific intelligence, is the ultimate goal in AI -- something resembling human reasoning based on a variety of inputs and, crucially, self-learning.
In the case of Go, Google developers realised a more "human-like" approach would win over brute computing power.
The 3,000-year-old Chinese board game involves two players alternately laying black and white stones on a chequerboard-like grid of 19 lines by 19 lines. The winner is the player who manages to seal off more territory.
AlphaGo uses two sets of "deep neural networks" that allow it to crunch data in a more human-like fashion -- discarding millions of potential moves that human players would instinctively know were pointless.
It also employs algorithms that allow it to learn and improve from matchplay experience.
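As a loose illustration of that idea -- not DeepMind's actual implementation -- the sketch below shows how a learned "policy" score might be used to discard unpromising moves before any deeper search, with a separate "value" estimate ranking the survivors. Both scoring functions here are random stand-ins for trained networks, and the move labels stand in for real board positions.

```python
# Minimal, illustrative sketch of policy-guided move pruning followed by
# value-based selection. The two scoring functions are random stand-ins
# for trained policy and value networks; a real system would evaluate an
# actual board position rather than a move label.

import random
from typing import Callable, List


def prune_and_pick(moves: List[str],
                   policy: Callable[[str], float],
                   value: Callable[[str], float],
                   keep_fraction: float = 0.1) -> str:
    """Keep only the moves the policy rates highest, then return the one
    with the best value estimate among the survivors."""
    ranked = sorted(moves, key=policy, reverse=True)
    survivors = ranked[:max(1, int(len(ranked) * keep_fraction))]
    return max(survivors, key=value)


if __name__ == "__main__":
    random.seed(0)
    # 361 nominal candidate moves, one per intersection of a 19x19 board.
    candidate_moves = [f"({r},{c})" for r in range(19) for c in range(19)]
    fake_policy = lambda move: random.random()  # stand-in for a policy network
    fake_value = lambda move: random.random()   # stand-in for a value network
    print("Chosen move:", prune_and_pick(candidate_moves, fake_policy, fake_value))
```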
Google co-founder Sergey Brin, who was in Seoul to witness AlphaGo's victory, said watching great Go players was like "watching a thing of beauty."
"I'm very excited we've been able to instill this kind of beauty in a computer," Brin said.