SimKiev

Ben Steenhuisen
datdota
Apr 20, 2017


One of the most important aspects of making macro-predictions about a large and complex event such as the Kiev Major is breaking the event down into a series of smaller sub-predictions that make sense to model and are easier to evaluate, then building those back up into a model that simulates your original question. It’s very difficult to answer even a seemingly simple question like “which hero is going to be picked the most?” without first analyzing how many matches we expect each team to play, which teams will enter the Playoff Bracket as a top seed, or even how many best-of-3s will go 2–0. Almost all of the Compendium Predictions are affected by these very important sub-predictions, so let’s look at an example problem: ‘what is the expected prize money for each team going into the event?’.

One of the better-known methods for solving this problem (and one that long-time datdota blog readers will be well aware of) is a Monte Carlo method: pseudo-random sampling experiments repeated many times to simulate the underlying problem. Here we use the Glicko2 rating of each of the 16 teams (for the 4 rosters that [annoyingly] swapped organizations before the event, we use the ratings of their previous organizations) to establish head-to-head expected win percentages, and a 10⁶-iteration Python loop to handle the Monte Carlo aspect.
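
To make that concrete, here is a minimal sketch of how a pair of Glicko2 ratings can be turned into a head-to-head win probability and used to simulate a best-of-three. The ratings in RATINGS are placeholders rather than datdota’s actual values, the expected-score formula is the simplified Glicko one rather than full Glicko2, and simulate_event() (the Swiss and bracket logic) is only referenced, not implemented.

```python
import math
import random

# Illustrative Glicko2 ratings (rating, rating deviation) on the usual
# display scale; placeholder numbers, not datdota's actual values.
RATINGS = {
    "OG": (2250, 60),
    "Invictus Gaming": (2180, 65),
    "Team Liquid": (2150, 70),
    "Newbee": (2140, 70),
    # ... the other 12 teams would go here ...
}

def win_probability(team_a, team_b):
    """Expected score of team_a against team_b from their Glicko2 ratings.

    Uses the standard Glicko expected-score formula with the opponent's
    rating deviation; a simplification of full Glicko2 (which works on
    the mu/phi scale), but fine for a sketch.
    """
    r_a, _ = RATINGS[team_a]
    r_b, rd_b = RATINGS[team_b]
    q = math.log(10) / 400
    g = 1 / math.sqrt(1 + 3 * (q * rd_b) ** 2 / math.pi ** 2)
    return 1 / (1 + 10 ** (-g * (r_a - r_b) / 400))

def play_bo3(team_a, team_b, rng):
    """Simulate one best-of-three; returns (winner, loser, games_played)."""
    p_a = win_probability(team_a, team_b)
    wins_a = wins_b = 0
    while wins_a < 2 and wins_b < 2:
        if rng.random() < p_a:
            wins_a += 1
        else:
            wins_b += 1
    if wins_a == 2:
        return team_a, team_b, wins_a + wins_b
    return team_b, team_a, wins_a + wins_b

# The full simulation would wrap the Swiss group stage and the playoff
# bracket around play_bo3() and repeat everything 10**6 times, e.g.:
#   results = [simulate_event(RATINGS, random.Random(i)) for i in range(10**6)]
# where simulate_event() holds the tournament-format logic (not shown).
```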

We tracked only a few key statistics in each loop, most importantly where a team placed in the Swiss group stage and where they placed in the main bracket. We also recorded each team’s most frequent Swiss finishing score, and the percentage of simulations in which they finished with exactly that score. Calculating the expected prize money then became simple arithmetic over the distribution of finishing positions.
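
Here is a sketch of that bookkeeping, assuming a hypothetical simulate_event() that returns each team’s Swiss record and final placement per iteration; the PRIZE table uses illustrative figures rather than the official Kiev Major breakdown.

```python
from collections import Counter, defaultdict

# Illustrative per-team prize money by final placement (USD); placeholder
# numbers rather than the official Kiev Major breakdown.
PRIZE = {1: 1_000_000, 2: 500_000, 3: 250_000, 4: 250_000}
PRIZE.update({place: 125_000 for place in range(5, 9)})
PRIZE.update({place: 62_500 for place in range(9, 17)})

def summarise(results):
    """Collapse per-iteration results into the statistics described above.

    `results` is assumed to be a list with one dict per simulated
    tournament, mapping team -> (swiss_record, final_placement), as a
    hypothetical simulate_event() might return.
    """
    swiss_counts = defaultdict(Counter)  # team -> Counter of Swiss records
    prize_totals = defaultdict(float)    # team -> summed prize money

    for outcome in results:
        for team, (swiss_record, placement) in outcome.items():
            swiss_counts[team][swiss_record] += 1
            prize_totals[team] += PRIZE[placement]

    n = len(results)
    summary = {}
    for team, counter in swiss_counts.items():
        record, count = counter.most_common(1)[0]
        summary[team] = {
            "expected_prize": prize_totals[team] / n,
            "most_frequent_swiss_record": record,
            "record_frequency": count / n,
        }
    return summary
```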

[Table: Monte Carlo simulation output]

What do we see from the model’s output? Well, first that OG are the outright favourites [1] to win the event, doing so in 31.99% of simulations (odds of 1:3.22 would have been too good to pass up), with iG behind them at 21.96%. We also see that the model expects Team Random (formerly Wings) to have a poor performance, dropping out in the first round in 77.95% of the simulations; Mousesports (formerly Ad Finem) are in a similar state.

The most-frequent-score frequencies also suggest that it’s more likely than not that Faceless, Secret, Thunderbirds, VGJ and TNC end the Swiss round on 2–2, and give OG a 44.74% chance of a clean 3–0 run (the model also has them at 39.51% to end 3–1).

Now, this first iteration of the model is obviously a bit off in some aspects: even the most anti-Brazilian pundits wouldn’t put SG e-sports at 1:7692 (Pinnacle has them at 331.550 to win, an implied chance roughly 23x higher than our model gives them), and OG’s ~32% chance also seems a bit high (Pinnacle has them at 4.950, which translates to ~20.2%). This is pretty common with Monte Carlo simulations: pre-simulation assumptions are held fixed even when events within the simulation should suggest change, for example a team building (or losing) a lot of momentum that isn’t accounted for, or coming into the event with excellent strategies that have yet to be revealed and take time to counter. This additional, hidden information ends up cooling down the simulation results, with most teams performing slightly closer to the mean than the model would suggest.
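
One simple way to express that cooling-down effect is to shrink the simulated tournament-win probabilities towards a flat baseline; the shrinkage weight below is a made-up tuning parameter, not something fitted to bookmaker odds.

```python
def cool_towards_the_field(model_probs, weight=0.25):
    """Blend simulated tournament-win probabilities with a flat baseline.

    `weight` is the share of probability mass handed to the uniform
    1/16-per-team baseline; 0.25 is an arbitrary illustration, not a
    fitted value.
    """
    n = len(model_probs)
    return {team: (1 - weight) * p + weight / n
            for team, p in model_probs.items()}

# Example: OG's raw 31.99% shrinks to roughly 25.6% with weight=0.25,
# since 0.75 * 0.3199 + 0.25 / 16 is about 0.2556.
```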

Two other features that could be added to the model are regional strength and head-to-head history. Different regions have slightly different skill distributions and median skill levels, yet are compared equally under a common rating system like Glicko2. The impact of this is reduced by frequent inter-regional events (like the Majors), but the system still overvalues weak regions (most readers would point at the SEA teams here, and I believe they’d be right: Faceless’s Glicko2 rating has been near the top 5 for most of the post-TI6 period despite them never winning a bo3 or bo2 series against non-SEA teams) and undervalues highly competitive regions (CIS and China are probably the most egregious cases). A head-to-head coefficient could also account for some of the quirky team-versus-team dynamics we’ve seen in the history of professional Dota 2.
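
For illustration, here is one shape such adjustments could take: a per-region rating offset plus a small head-to-head bonus folded into the expected-score formula. The REGION_OFFSET values and the h2h_edge parameter are invented for the example, not estimates.

```python
# Hypothetical per-region rating offsets (in Glicko2 points); the signs
# and magnitudes here are made up purely to show the shape of the idea.
REGION_OFFSET = {
    "Europe": 0, "CIS": 25, "China": 25,
    "NA": 0, "SA": -60, "SEA": -40,
}

def adjusted_win_probability(rating_a, region_a, rating_b, region_b,
                             h2h_edge=0.0):
    """Head-to-head expected score with regional and head-to-head tweaks.

    `h2h_edge` is a small additive rating bonus for team A derived from
    previous meetings between these two specific teams (a hypothetical
    coefficient, positive if A historically over-performs against B).
    """
    r_a = rating_a + REGION_OFFSET[region_a] + h2h_edge
    r_b = rating_b + REGION_OFFSET[region_b]
    return 1 / (1 + 10 ** (-(r_a - r_b) / 400))
```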

The true power of the Monte Carlo approach is that you can now easily expand the model to answer additional questions such as “what are the most likely finals pairings?” (beyond OG vs {iG, Liquid, Newbee} it’s, somewhat boringly, Liquid vs iG) or “how many games do we expect to see in the main event?” (37.4).
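
Answering those follow-ups only needs a little extra bookkeeping per iteration; the sketch below assumes the hypothetical simulate_event() also records the finalists and the total game count for each run.

```python
from collections import Counter

def extra_statistics(results):
    """Answer the follow-up questions from the same simulation runs.

    `results` is assumed to hold one dict per iteration with a
    'finalists' pair and a 'total_games' count, fields a hypothetical
    simulate_event() could record alongside the placements.
    """
    pairings = Counter(frozenset(r["finalists"]) for r in results)
    expected_games = sum(r["total_games"] for r in results) / len(results)
    return pairings.most_common(5), expected_games
```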

Noxville

[1]: edited to reflect a correction as per a Tweet from a reader
