Building an Artificial Intelligence for Risk: Part 1 — Minimax and the Branching Factor

Christian Sadi
5 min read · Nov 21, 2017


“I want to see this by Monday,” said Ben Wilhelm, an instructor at Fullstack Academy in Chicago. Fullstack Academy, where I’m currently studying, is a coding bootcamp focused on full stack JavaScript development. We’re in the last few weeks of our 3-month program, working on our final projects.

“Wait, Monday?” I asked, a little surprised at the tight deadline. That’s only 4 days, and that’s assuming I work on it all weekend.

“Yeah, show me a proof of concept for your artificial intelligence.”

He’s got this smirk on his face that simultaneously broadcasts, “I know you’re capable of doing this,” and “Game time. Show me what you’ve got.”

Now, if you know anything about me, you know I rarely back away from a challenge, even when my reasons are, in retrospect, hard to rationalize. I’ll take your $100 bet on this pool game, even though I know I’m too drunk to play well, because drunk me knows he’s actually the best pool player in the world. I might learn someday, but each time I’m absolutely positive that today is not that day.

We had decided on building an online multiplayer game based on Risk, the Hasbro game. We figured one of our stretch goals would be to implement an artificial intelligence so that you could play the game on single player mode.

Stretch goals.

Stretch.

As in: not integral to the initial product we were building.

But Ben did what good teachers do and pushed us out of our comfort zone.

“Oh, we can totally make that happen,” I said confidently.

Narrator: He did not know if they could make that happen.

I didn’t know anything about artificial intelligence that I hadn’t seen in Will Smith’s ‘I, Robot’ or that crazy movie ‘Ex Machina,’ but luckily, my best friend Google had my back and led me to the minimax algorithm.

At its most basic, the minimax algorithm chooses the best possible move out of all available moves on a given turn, over a series of turns. A ‘perfect’ implementation searches all the way down to every possible end state of win and loss, and uses those outcomes to weigh some moves as more or less favorable than others. For Tic Tac Toe, the algorithm would calculate every move players could make across every turn until a win, loss, or draw, and then make the best move on each turn.
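To make that concrete, here’s a minimal minimax sketch for Tic Tac Toe. The board shape and the names (`bestMove`, `winner`) are ours, not from any library; this is just an illustration of the idea, not our game’s implementation.

```javascript
// Board: an array of nine cells holding 'X', 'O', or '' (empty).
const LINES = [
  [0, 1, 2], [3, 4, 5], [6, 7, 8], // rows
  [0, 3, 6], [1, 4, 7], [2, 5, 8], // columns
  [0, 4, 8], [2, 4, 6],            // diagonals
];

function winner(board) {
  for (const [a, b, c] of LINES) {
    if (board[a] && board[a] === board[b] && board[a] === board[c]) {
      return board[a];
    }
  }
  return null;
}

// Score a position from X's point of view: +1 win, -1 loss, 0 draw.
// Recurses through every remaining move until the game ends.
function minimax(board, player) {
  const w = winner(board);
  if (w) return w === 'X' ? 1 : -1;
  if (!board.includes('')) return 0; // board full: draw

  const scores = [];
  for (let i = 0; i < 9; i++) {
    if (board[i] === '') {
      board[i] = player;
      scores.push(minimax(board, player === 'X' ? 'O' : 'X'));
      board[i] = ''; // undo the move
    }
  }
  // X maximizes the score, O minimizes it.
  return player === 'X' ? Math.max(...scores) : Math.min(...scores);
}

// Return the index of the best move for `player`.
function bestMove(board, player) {
  let best = -1;
  let bestScore = player === 'X' ? -Infinity : Infinity;
  for (let i = 0; i < 9; i++) {
    if (board[i] === '') {
      board[i] = player;
      const score = minimax(board, player === 'X' ? 'O' : 'X');
      board[i] = '';
      if (player === 'X' ? score > bestScore : score < bestScore) {
        bestScore = score;
        best = i;
      }
    }
  }
  return best;
}

// X has two in a row and wins immediately by playing cell 2.
console.log(bestMove(['X', 'X', '', 'O', 'O', '', '', '', ''], 'X')); // → 2
```

Even this toy version visits every reachable position, which is fine for Tic Tac Toe and, as we’re about to see, hopeless for our game.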

The difficulty of implementing an artificial intelligence for a game is usually gauged by the game’s branching factor: the number of possible moves available on any given turn. To put our game into context, let’s look at the branching factors of some other games and then calculate ours.

Connect Four — branching factor of 7

Chess — average branching factor of 35

Go — average branching factor of 250

AlphaGo, developed by Google’s DeepMind, was the first AI to beat a professional Go player. It did so in 2015.

Now let’s calculate the branching factor for our game assuming a player has 10 territories:

Game logic and mechanics: Our game, Domination, is played on a hex grid. Each hex is called a territory and holds anywhere from 1–15 units. Each turn has 3 phases. In the allotment phase, a player places newly generated units on his own territories. In the attack phase, a player may launch at most one attack against an adjacent territory from each territory he owns. In the fortification phase, a player may take units from one of his territories and move them to, or fortify, an adjacent territory.

Each territory has 6 adjacent territories. A territory can attack an adjacent territory, fortify an adjacent territory, or do nothing. That’s attack (6) + fortify (6) + do nothing (1). A branching factor of 13 isn’t too bad for a first crack at developing a game AI.

Wait. Each territory has those same 13 moves. So now it’s 13¹⁰.

But wait. What if you have two territories next to each other? If one of those territories attacked a hex that was next to the other territory, that would change the move set of the other territory. Given this, we now know that the order in which we make our moves matters. We now need to calculate all the different permutations of territories. The number of permutations of 10 territories can be calculated as 10!, or 10 factorial.

Okay. So now we’ve got 10! × 13¹⁰. What does that come out to?

5.002609e+17
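A quick sanity check of that number, sketched with BigInt so the product stays exact (it’s too big for ordinary floating-point precision). The inputs come straight from the assumptions above: 10 territories, 13 possible moves per territory.

```javascript
// 10! orderings of the territories, times 13^10 move combinations.
const factorial = (n) => (n <= 1n ? 1n : n * factorial(n - 1n));

const orderings = factorial(10n); // 10! = 3,628,800 territory orderings
const moveCombos = 13n ** 10n;    // 13^10 = 137,858,491,849 move sets

console.log((orderings * moveCombos).toString());
// → 500260895221651200, i.e. ~5.002609e+17
```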

That’s like, way bigger than a billion. And a billion is way bigger than 250. And 250 is the branching factor for Go, which Google’s DeepMind only cracked two years ago. And that’s just for one turn. Within 6 turns, the number of move combinations reaches a googol. The number googol, not the company. A 1 with a hundred zeroes after it. That one.

And that isn’t even counting that every attack involves a ton of possible dice-roll combinations across all of the units in an individual battle, each with its own set of outcomes.
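Domination’s exact dice mechanics are a Part 2 topic, but for a sense of scale, here’s a brute-force count of the outcomes of a single classic Risk-style roll: 3 attacker dice versus 2 defender dice, highest dice compared pairwise, defender winning ties. Those rules are an assumption for illustration; our game’s rules differ in the details.

```javascript
// Enumerate all 6^5 = 7,776 equally likely rolls of one 3-vs-2 battle round
// and tally how many units the attacker loses in each.
function countOutcomes() {
  const tally = { attackerLoses0: 0, split: 0, attackerLoses2: 0 };
  for (let a1 = 1; a1 <= 6; a1++)
  for (let a2 = 1; a2 <= 6; a2++)
  for (let a3 = 1; a3 <= 6; a3++)
  for (let d1 = 1; d1 <= 6; d1++)
  for (let d2 = 1; d2 <= 6; d2++) {
    const atk = [a1, a2, a3].sort((x, y) => y - x); // highest first
    const def = [d1, d2].sort((x, y) => y - x);
    let attackerLosses = 0;
    for (let i = 0; i < 2; i++) {
      if (def[i] >= atk[i]) attackerLosses++; // defender wins ties
    }
    if (attackerLosses === 0) tally.attackerLoses0++;
    else if (attackerLosses === 1) tally.split++;
    else tally.attackerLoses2++;
  }
  return tally;
}

console.log(countOutcomes());
// → { attackerLoses0: 2890, split: 2611, attackerLoses2: 2275 }
```

That’s 7,776 raw rolls collapsing into just three unit outcomes per round, which hints at the probability-matrix approach we’ll cover in Part 2.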

We now know that this problem is multiple orders of magnitude larger than anything an AI has currently solved, and that creating a perfect minimax implementation isn’t feasible without years of research and, at the very least, access to a supercomputer.

The next parts of this article series will discuss how we broke down the big problem of AI into smaller chunks and built a computer player for our game that 1) doesn’t take years to calculate its moves, and 2) can play competitively against a real human player.

Part 2 of this article in particular will focus on the dice combinatorics and the algorithms that helped us turn all of the different die roll outcomes across all unit matchups into a single matrix that returns the probability of winning that battle. We’ll also explain what an expectiminimax algorithm is and why we’re using one.
