Google's AI Beat a Go Champion by Mimicking Human Intuition

Google's AlphaGo has bested Korean champion Lee Sedol. But can the new AI technology find a role outside of playing games?

Chris Wiltz

April 6, 2016


Google DeepMind's advanced artificial intelligence (AI) system, AlphaGo, has added a landmark victory to its record, beating champion professional Go player Lee Sedol four games to one. Earlier this month, Google posted a comprehensive breakdown of AlphaGo's match against Sedol, complete with analysis and expert commentary.

Writing for the Google blog, Demis Hassabis, CEO and co-founder of DeepMind, said AlphaGo's win demonstrated that it's possible for an AI to come up with "global" solutions that humans might not see or consider. "This has huge potential for using AlphaGo-like technology to find solutions that humans don't necessarily see in other areas," Hassabis wrote. "... Because the machine learning methods we've used in AlphaGo are general purpose, we hope to apply some of these techniques to other challenges in the future."

The journal Nature has produced a short documentary on the development of AlphaGo.

Go, an ancient board game created more than 2,500 years ago, is deceptively simple: easy to learn, but astronomically complex. Players take turns placing black and white stones on a grid board, with the aim of surrounding territory and capturing the opponent's stones. In terms of possible positions, Go is to Chess what Chess is to Tic-Tac-Toe. And while AI programs had played the game at novice level before, none had ever played it at a professional level. AlphaGo's achievement puts AI development roughly a decade ahead of where computer scientists anticipated it would be at this point.

What's significant about AlphaGo is that Google was able to create an AI system that doesn't just guess or repeat moves in a game; it makes decisions in a way that resembles human intuition. By having the system study 150,000 games of Go and then play against itself repeatedly, DeepMind researchers built an evaluation system that combines search and optimization (the approach used by the famous Deep Blue) with a neural network capable of very accurately judging board positions. Essentially, AlphaGo doesn't just consider individual moves and pieces; it looks at the game from a big-picture perspective. Whereas previous AI like Deep Blue was very good at searching, AlphaGo is good at searching and at recognizing and applying patterns.
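
To make that combination concrete, below is a minimal, self-contained Python sketch of the idea. It is not DeepMind's implementation (AlphaGo pairs Monte Carlo tree search with deep policy and value networks trained on human games and self-play), and the toy game and every function name here are hypothetical. But it shows the same division of labor: a game-tree search handles the lookahead, and at the search horizon a position evaluator, standing in for the trained value network, judges the board instead of playing the game out.

# Hypothetical illustration: depth-limited game-tree search that consults a
# position evaluator at the horizon instead of playing the game to the end.
# The toy game is Nim (remove 1-3 stones; whoever takes the last stone wins),
# and value_estimate() stands in for AlphaGo's trained value network.

def legal_moves(pile):
    return [n for n in (1, 2, 3) if n <= pile]

def value_estimate(pile):
    """Stand-in for a learned value network: estimated probability that
    the player to move wins. (AlphaGo learns this from data; here we
    hard-code a crude heuristic based on Nim theory.)"""
    return 0.1 if pile % 4 == 0 else 0.9

def search(pile, depth):
    """Negamax search truncated at `depth`, where the evaluator takes over."""
    if pile == 0:
        return 0.0, None  # the previous player took the last stone; we lost
    if depth == 0:
        return value_estimate(pile), None
    best_value, best_move = -1.0, None
    for move in legal_moves(pile):
        opponent_value, _ = search(pile - move, depth - 1)
        my_value = 1.0 - opponent_value  # the opponent's gain is our loss
        if my_value > best_value:
            best_value, best_move = my_value, move
    return best_value, best_move

value, move = search(pile=13, depth=4)
print(f"take {move} stone(s); estimated win probability {value:.2f}")

The evaluator is what lets the search stay shallow: rather than exploring every line to the end of the game, the program judges whole positions at the horizon, a rough analogue of the "global" judgment Hassabis describes.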

Writing for Quanta Magazine, Michael Nielsen, a computer scientist and research fellow at the Recurse Center in New York City, explained:

Since the earliest days of computing, computers have been used to search out ways of optimizing known functions. Deep Blue's approach was just that: a search aimed at optimizing a function whose form, while complex, mostly expressed existing chess knowledge. It was clever about how it did this search, but it wasn't that different from many programs written in the 1960s ... AlphaGo also uses the search-and-optimization idea, although it is somewhat cleverer about how it does the search. But what is new and unusual is the prior stage, in which it uses a neural network to learn a function that helps capture some sense of good board position. It was by combining those two stages that AlphaGo became able to play at such a high level.

It's easy to imagine the impact such a sophisticated system could have in manufacturing, particularly in the emerging area of predictive maintenance. Imagine a system capable not only of anticipating the need for machine maintenance, but also of recognizing patterns in workflow or workload that lead to breakdowns. Such a system could also increase factory efficiency by anticipating problem areas and advising workers on how to improve or avoid them.
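
To give a flavor of the simplest version of that idea, here is a deliberately basic, hypothetical Python sketch: it flags a machine for inspection when a sensor reading drifts far outside its own recent history. An AlphaGo-like system would replace this fixed statistical rule with models learned from plant data, but the goal, spotting the pattern before the failure, is the same.

# Hypothetical predictive-maintenance illustration: flag a machine when a
# sensor reading deviates sharply from its own trailing history.
from statistics import mean, stdev

def flag_anomalies(readings, window=20, threshold=3.0):
    """Return indices where a reading deviates from the trailing-window
    mean by more than `threshold` standard deviations."""
    flagged = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            flagged.append(i)
    return flagged

vibration = [1.0 + 0.02 * (i % 5) for i in range(60)] + [2.5]  # sudden spike
print(flag_anomalies(vibration))  # -> [60]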

There are still questions around AlphaGo's efficacy, however. In his article, Nielsen discussed how easily systems like AlphaGo can be fooled by tiny variations in their data sets. He also pointed out that AlphaGo is built on a wealth of data provided by humans (150,000 Go games represent a lot of human work). As Nielsen wrote:

By contrast, human beings can learn a great deal from far fewer games. Similarly, networks that recognize and manipulate images are typically trained on millions of example images, each annotated with information about the image type. And so an important challenge is to make the systems better at learning from smaller human-supplied data sets, and with less ancillary information.

Others question whether AlphaGo should be called AI at all. In an article published in IEEE Spectrum, Jean-Christophe Baillie, founder and president of Novaquark, a Paris-based virtual reality startup, argued that without a robotics component to let them truly interact with the physical world, systems like AlphaGo will remain sophisticated software and nothing more. For Baillie, a system cannot acquire true intelligence without a body capable of manipulating, sensing, and experiencing the real world -- even if that experience is limited to placing stones on a Go board. "There is no AI without robotics," he wrote. "This realization is often called the 'embodiment problem' and most researchers in AI now agree that intelligence and embodiment are tightly coupled issues. Every different body has a different form of intelligence, and you see that pretty clearly in the animal kingdom."

But DeepMind researchers are pushing forward. The MIT Technology Review has reported that AlphaGo's researchers are now turning their efforts to card games, specifically poker and the fantasy card games Magic: The Gathering and Hearthstone. All of these games involve "imperfect information," which makes them even more challenging for AI than Go. A poker player, for example, can make educated guesses but has no direct knowledge of her opponent's hand (unless she's cheating). The player's success thus relies not only on her understanding of what the cards mean, but also on her intuition, her ability to read other players, and, of course, her ability and willingness to bluff. Mastering a game in which the system cannot see at least half of the relevant information (the other player's hand) presents a whole new realm of obstacles. A system created by David Silver, the lead researcher behind AlphaGo, and Johannes Heinrich, a research student at University College London, has already achieved expert-level play at Texas hold 'em, according to a paper published by the researchers.
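
The core difficulty is easy to demonstrate. The hypothetical Python sketch below, a toy illustration and not the method from the Silver and Heinrich paper, uses a made-up one-card game: because the opponent's card is hidden, the program cannot evaluate a single known position the way a Go program can. Instead it averages over every card the opponent might hold, here by random sampling, before deciding whether to bet.

# Hypothetical one-card game to illustrate "imperfect information": each
# player holds one card from 1-10 and the higher card wins. We know our own
# card but not the opponent's, so we estimate our win probability by
# sampling the hidden card, then bet only if the estimate clears a threshold.
import random

DECK = list(range(1, 11))

def estimate_win_prob(my_card, samples=10_000):
    """Monte Carlo estimate of beating a hidden, uniformly dealt card."""
    remaining = [c for c in DECK if c != my_card]
    wins = sum(my_card > random.choice(remaining) for _ in range(samples))
    return wins / samples

def decide(my_card, bet_threshold=0.5):
    p = estimate_win_prob(my_card)
    return ("bet" if p >= bet_threshold else "fold"), p

for card in (2, 6, 9):
    action, p = decide(card)
    print(f"card {card}: {action} (estimated win probability {p:.2f})")

Real poker adds betting rounds, bluffing, and opponents who adapt, which is why mastering it demands far more than this kind of sampling.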

Chris Wiltz is the Managing Editor of Design News.
