
Artificial Intelligence: Ever more powerful, AlphaGo now learns without human data

The new version of Google DeepMind's program, which had beaten the best professional players of the game of Go, proves far more powerful than its predecessor.


Another feat for AlphaGo. After becoming, in 2016, the first computer program capable of beating a human at Go, crushing the best players in the world, Google DeepMind's software has grown even more skilled. In an article published Thursday, October 19, in the prestigious scientific journal Nature, the creators of AlphaGo announce that they have developed a considerably more powerful version of their program, one that, above all, is able to learn to play "without knowing anything about the game of Go," they explain on their blog.

Whereas, in order to function, AlphaGo learned from millions of examples of games played by humans, AlphaGo Zero, the name of the new version, needs no examples at all. The only information available to the program, which is based on an artificial neural network, is the rules of the game and the positions of the black and white stones on the board. From there, to learn, the program plays millions of games against itself: random moves at first, then, game after game, an ever more refined strategy.
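The self-play loop described above (random moves at first, a strategy refined game after game) can be sketched in miniature. The toy below is not DeepMind's method (AlphaGo Zero pairs a deep neural network with Monte Carlo tree search); it is a tabular Q-learning agent that learns the simple game of Nim purely by playing against itself. Every name and parameter here is illustrative.

```python
import random

# Toy self-play learner for Nim (normal play: whoever takes the last
# stone wins). Illustrative only; AlphaGo Zero uses a deep network and
# tree search, not a lookup table.
HEAP = 10      # starting number of stones
MAX_TAKE = 3   # a move removes 1 to 3 stones

def moves(s):
    return range(1, min(MAX_TAKE, s) + 1)

def train(episodes=20000, alpha=0.5, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    Q = {}  # Q[(stones, take)] = value from the mover's perspective
    q = lambda s, a: Q.get((s, a), 0.0)
    for _ in range(episodes):
        s = HEAP
        while s > 0:
            # epsilon-greedy: random moves at first, greedy as Q sharpens
            if rng.random() < epsilon:
                a = rng.choice(list(moves(s)))
            else:
                a = max(moves(s), key=lambda m: q(s, m))
            nxt = s - a
            # negamax target: win outright, or minus the opponent's best value
            target = 1.0 if nxt == 0 else -max(q(nxt, m) for m in moves(nxt))
            Q[(s, a)] = q(s, a) + alpha * (target - q(s, a))
            s = nxt  # the "opponent" is the same policy, moving next
    return Q

def best_move(Q, s):
    return max(moves(s), key=lambda m: Q.get((s, m), 0.0))

Q = train()
# Optimal Nim play leaves the opponent a multiple of (MAX_TAKE + 1) stones;
# with these settings the learned move from a 10-stone heap converges to 2.
print(best_move(Q, 10))
```

The key property mirrored here is that no expert games are supplied: the agent starts from a blank table, and the only external inputs are the rules (legal moves and the win condition), exactly as the article describes for AlphaGo Zero.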

And the result is unequivocal: after only three days of training, this program beat AlphaGo Lee 100 games to 0, AlphaGo Lee being the version of the program that, in March 2016, achieved the historic feat of beating the South Korean Lee Sedol, then considered the best player in the world, 4-1. AlphaGo Lee had, however, required months of training and 30 million games. AlphaGo Zero needed "only" 4.9 million games played against itself to crush AlphaGo Lee. To beat AlphaGo Master, a more powerful version of AlphaGo that had notably floored the world number 1, Ke Jie, in May 2017, 40 days of training were needed. Moreover, AlphaGo Zero requires far fewer computing resources to operate than its predecessors.

Read our explanations: The revolution of artificial neurons

A method limited to certain areas

"This technique is more powerful than previous versions of AlphaGo because it is no longer constrained by limits of human knowledge," researchers explain. Instead, she is able to learn from scratch with best player in world: AlphaGo himself. This "reinforcement" learning method, mingled with or technical optimizations of DeepMind, is refore more effective than previous one, which combined "supervised" learning (based on parts played by humans) and Learning by strengning.

"It's really impressive," says Tristan Carter, professor at Université Paris-Dauphine, specialist in programming of games at Laboratoire Lamsade. "It's amazing that he manages to learn as well from se minimum entries." This is a very good news for artificial intelligence: we will be able to apply it to a lot of different problems, because it is a very general and very powerful method. DeepMind's researchers evoke possibility that it can be used in areas as diverse as energy consumption reduction, design of new materials or folding of proteins.

However, the method devised by DeepMind cannot be applied to every problem artificial intelligence faces; it is far from able to dispense with human data in a great many cases. "To apply this method, the framework must be very well defined: you need a strong representation of the domain, not too much blur in the rules, and a well-defined problem. This suits games well, because there is perfect knowledge of the environment and the rules, and little is unforeseen," explains Tristan Carter. Go therefore lends itself perfectly.

The beauty of it is that AlphaGo Zero discovers new Go knowledge. On its own, it found classical sequences that everyone knows but that took humans thousands of years to discover; it took the program three days. And it finds original, relevant moves that we had never discovered.

"DeepMind has a dream team"

All of this gives more food for thought to Go professionals, who dissect with interest the games played by the different versions of AlphaGo. Some of the moves played by the program, which had thrown off Lee Sedol or Ke Jie, continue to intrigue fans of this extremely complex game, invented in China some 3,000 years ago. In high-level competitions, play is increasingly inspired by AlphaGo, even though the logic of some of its moves still eludes players.

READ ALSO: How AlphaGo has transformed artificial intelligence and game of Go

Among artificial intelligence researchers specializing in the game of Go, such as Tristan Carter, who has worked on the subject for years, excitement outweighs discouragement. "It's very motivating, on the contrary! They found an elegant solution to a difficult problem. We want to build the same program, to study it, to apply it to something else..." He also salutes DeepMind's performance, having accomplished significant progress in just a few months: "They worked in record time, they were original and creative... They have a dream team, at the forefront and very motivated."

Headquartered in London, DeepMind, a company specializing in artificial intelligence, was bought by Google in 2014, four years after its creation. Its historic victory at Go gave it immense visibility, but the company is also working on other issues, especially in health. For example, it has signed several partnerships with London hospitals, to facilitate the mapping of areas to be treated in head and neck cancers, and to create an application meant to help hospital staff detect as many cases of acute renal failure as possible. The latter partnership also earned it criticism, after the transfer of data on 1.6 million patients without their being sufficiently informed.

Though DeepMind announced in May that AlphaGo would no longer take part in competitions, its ambitions in games do not stop there: the company is now concentrating on the video game StarCraft 2, which poses new challenges to the world of artificial intelligence.

