In an advance many experts believed was still decades away, Google has developed an artificial intelligence capable of beating champion-level players at Go.

Chris Wiltz

February 18, 2016

New Google AI Beats Humans at The World's Most Complex Board Game

Artificial intelligence is beating us all the time lately. AIs kill us in video games, they've beaten top chess grandmasters, won at Jeopardy, and will probably be much better drivers than any of us. Up until a few weeks ago, we at least had the ancient game of Go to hide behind. But AlphaGo, a new AI project from Google, is putting an end to that.

"[Go] is played primarily through intuition and feel, and because of its beauty, subtlety, and intellectual depth it has captured the human imagination for centuries," Demis Hassabis, head of the Google DeepMind lab, which created AlphaGo, wrote in a blog post. "...This complexity is what makes Go hard for computers to play, and therefore an irresistible challenge to artificial intelligence (AI) researchers, who use games as a testing ground to invent smart, flexible algorithms that can tackle problems, sometimes in ways similar to humans."

For those unfamiliar, Go is a board game invented in China more than 2,500 years ago (making it centuries older than chess). Two players take turns placing black or white stones on the intersections of a grid board (19 x 19 at standard size). The aim is to control the board by surrounding territory and capturing your opponent's stones. The rules of the game are deceptively simple, but the size of the game board, coupled with the fact that players may place a stone on virtually any open intersection, creates a game scenario with an astronomical number of possible game variations (10^700 is the most common estimate). By contrast, computer scientists have estimated the number of possible chess games at a still staggering, but far smaller, figure of around 10^120.
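To get a feel for that scale, here is a quick back-of-the-envelope check (our own illustration, not a figure from the article or the Nature paper): counting only the orders in which the 361 intersections could be filled, ignoring captures and move legality entirely, already gives roughly 10^768 sequences.

```python
import math

# Rough upper bound on Go move orderings: a 19 x 19 board has 361
# intersections, so even ignoring captures and move legality there are
# at most 361! distinct orders in which the points could be filled.
log10_orderings = math.lgamma(362) / math.log(10)  # log10(361!) via the log-gamma function
print(f"log10(361!) is about {log10_orderings:.0f}")  # prints ~768
```

The published 10^700 estimate is lower because it counts only legal games, but the order of magnitude is the same: a number with hundreds of digits.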

The common approach to creating an AI for a board game is to build a search tree. That is, to have the machine look ahead through possible moves and countermoves and pick the response that leads to the best outcome (i.e., if the player does X, the machine does Y). Imagine someone sitting down at a card table with a hundred of those "How to Play Blackjack" strategy cards and you get the idea. This works fine for simpler games like tic-tac-toe and checkers, but Go is simply too complex for this approach to play at a level that would challenge anyone but the most novice players.
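For a game as small as tic-tac-toe, that exhaustive lookahead is easy to demonstrate. The sketch below (our own illustration, not DeepMind's code) searches the complete game tree with the classic minimax method; the whole tree holds well under a million positions, so the machine can simply look ahead to the end of every possible game.

```python
# A minimal exhaustive game-tree search (minimax) for tic-tac-toe.

def winner(board):
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]
    for a, b, c in lines:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for `player`: +1 means X wins, -1 means O wins."""
    w = winner(board)
    if w:
        return (1 if w == "X" else -1), None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None  # board full: draw
    best = None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[m] = " "
        if best is None or (player == "X" and score > best[0]) \
                or (player == "O" and score < best[0]):
            best = (score, m)
    return best

print(minimax([" "] * 9, "X"))  # (0, 0): perfect play from an empty board is a draw
```

On a 19 x 19 Go board the same brute-force search would have to explore a branching factor of roughly 250 moves per turn, which is why this technique collapses.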


To overcome this, the team at Google used an advanced search tree technique called Monte Carlo tree search (MCTS). In a nutshell, MCTS works by randomly sampling the moves in a game, finding the ones that yield the best outcomes, and making those moves more likely to be chosen in the future. This process is implemented alongside deep neural networks to create a system capable of making the sort of complex decisions necessary for success at Go. "These neural networks take a description of the Go board as an input and process it through 12 different network layers containing millions of neuron-like connections," Hassabis wrote. "One neural network, the 'policy network,' selects the next move to play. The other neural network, the 'value network,' predicts the winner of the game."
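To make the idea concrete, here is a bare-bones MCTS (in its common UCT form) applied to the toy game of Nim, where players alternately take one to three stones and whoever takes the last stone wins. This is a hedged sketch of the general technique only; AlphaGo's actual search is far more elaborate and is guided by its policy and value networks rather than purely random playouts.

```python
import math, random

class Node:
    def __init__(self, stones, parent=None, move=None):
        self.stones, self.parent, self.move = stones, parent, move
        self.children, self.wins, self.visits = [], 0, 0

    def untried_moves(self):
        tried = {c.move for c in self.children}
        return [m for m in (1, 2, 3) if m <= self.stones and m not in tried]

def uct_child(node, c=1.4):
    # Balance exploitation (win rate) against exploration (rarely tried moves).
    return max(node.children, key=lambda ch: ch.wins / ch.visits
               + c * math.sqrt(math.log(node.visits) / ch.visits))

def mcts(stones, iterations=20000):
    root = Node(stones)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend via UCT while the node is fully expanded.
        while not node.untried_moves() and node.children:
            node = uct_child(node)
        # 2. Expansion: add one untried move as a new child.
        moves = node.untried_moves()
        if moves:
            m = random.choice(moves)
            child = Node(node.stones - m, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation: finish the game with random moves.
        s, mover_wins = node.stones, True  # True if the node's mover took the last stone
        us_to_move = False                 # the opponent replies first
        while s > 0:
            s -= random.randint(1, min(3, s))
            mover_wins = us_to_move
            us_to_move = not us_to_move
        # 4. Backpropagation: credit the result up the tree, flipping sides.
        win = 1 if mover_wins else 0
        while node is not None:
            node.visits += 1
            node.wins += win
            win = 1 - win
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move

print(mcts(10))  # converges on taking 2, leaving the opponent a multiple of 4
```

Even with purely random playouts, the visit counts concentrate on the strongest move; AlphaGo's innovation was replacing those random playouts with learned judgment.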

Once the system was devised, the next step for Google's engineers was to teach it how to play Go by having it watch human experts play each other, analyzing 30 million moves in the process. Once AlphaGo was able to predict the experts' moves with 57% accuracy, the next step was to have it dive in and play the game itself. Using the computing resources of Google's Cloud Platform, AlphaGo competed against other popular Go AI programs and also learned by playing against itself ... but unlike Joshua in WarGames, it probably won't conclude that the only winning move is not to play.
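That two-stage recipe (supervised learning on recorded human moves, then improvement through self-play) can be sketched in miniature. The toy code below is purely illustrative: a linear "policy" and random stand-in data take the place of AlphaGo's deep convolutional networks and millions of real expert positions.

```python
import numpy as np

rng = np.random.default_rng(0)
BOARD, MOVES = 361, 361        # flattened 19 x 19 input, one output per point
W = np.zeros((BOARD, MOVES))   # a linear policy standing in for a deep net

def policy(x):
    """Softmax over all 361 points: a probability for each possible move."""
    z = x @ W
    p = np.exp(z - z.max())
    return p / p.sum()

def supervised_step(position, expert_move, lr=0.1):
    """Stage 1: nudge the policy toward the move the human expert chose."""
    global W
    # Gradient ascent on the log-likelihood of the expert's move.
    W += lr * np.outer(position, np.eye(MOVES)[expert_move] - policy(position))

def selfplay_step(positions, moves, won, lr=0.01):
    """Stage 2: reinforce the moves of the winning side (REINFORCE-style)."""
    global W
    sign = 1.0 if won else -1.0
    for x, m in zip(positions, moves):
        W += lr * sign * np.outer(x, np.eye(MOVES)[m] - policy(x))

# Fake "expert" data, just to show the updates running:
x = rng.random(BOARD)
supervised_step(x, expert_move=42)
selfplay_step([x], [int(policy(x).argmax())], won=True)
print(policy(x).argmax())  # the policy now favors move 42 on this position
```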

According to a full study on AlphaGo published in the journal Nature, AlphaGo won 494 out of 495 matches against other programs, even when giving its opponents a handicap. But the crowning achievement was winning five out of five games against Fan Hui, the European Go champion for 2013, 2014, and 2015, in a closed-door match last October. It was the first time a machine had beaten a professional Go player. For its next challenge, AlphaGo will set its sights on Lee Sedol, the number-one ranked Go player in the world, in a match set to take place March 9-15 and be livestreamed on YouTube. "I have heard that Google DeepMind's AI is surprisingly strong and getting stronger," Sedol told VentureBeat. "But I am confident that I can win at least this time."

"If we win the match in March, then that's sort of the equivalent of beating Kasparov in chess," Hassabis said in a press briefing. "Lee Sedol is the greatest player of the past decade. I think that would mean AlphaGo would be better than any human at playing Go. Go is the ultimate game in AI research."

Google isn't the only company chasing this AI milestone. Just hours before the AlphaGo announcement in late January, Facebook published a blog post and an updated video about its own efforts to create a Go-playing AI. Facebook's aim in doing this isn't immediately clear (online gaming, perhaps?), but the company's achievement, and the power of its AI, was overshadowed by Google's.

Chris Wiltz is the Managing Editor of Design News

[Image via Wikimedia Commons / By Donarreiskoffer - Self-photographed, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=43383]

