Predictions and Suggestions using Hero Embeddings in Dota — Part 1 — Previous Work and Gathering Data

This post is the first part of a series I’m writing on applying Machine Learning to Dota 2.


(This post assumes you’re familiar with Dota 2. If you’re not, you can still get the gist of the post knowing it’s a highly skill-based multiplayer game where two teams of five each pick characters with unique abilities.)

In this series, I will use Machine Learning concepts such as embeddings and neural networks to predict the outcomes of Dota 2 matches given the complete team compositions, and to suggest heroes for partially completed picking stages.

Previous Work

Previous work on predicting match outcomes:

All of the links above provide the scripts and datasets used to achieve their results. One thing to notice is that their datasets are all heavily skewed towards radiant wins.

Note that, as long as samples were picked uniformly throughout the games, having data favor a side is OK (as long as you’re only interested in overall accuracy; if you care about false positives etc., this may not apply). That being said, notice that the improvement over a weighted coin flip is only about 9–10%.

As if that weren’t reason enough to strive for better performance, these studies used data from a couple of years back, and Dota 2 is a much more balanced game right now, so having certain heroes on your team no longer implies an unusually high chance of winning or losing the game the way it used to. Similarly, the win rate ratio of radiant to dire in recent pro matches is about 1, so the radiant advantage doesn’t hold either. As a result, the same scripts and techniques, when applied to an up-to-date, balanced dataset of pro games, yield merely ~57% accuracy, not even the low 60s.

By the end of this series, using hero embeddings and neural networks, we will achieve 73% accuracy on a balanced dataset.

Previous work on suggesting heroes for partially completed team compositions:

  • Dotaedge
  • dotapicker.com
  • truepicker.com

When it comes to suggestions, things get a lot vaguer, since it’s very hard to reliably measure the success of a suggestion: you almost never have similar enough games to compare how other possibilities would have panned out.

All of the suggestion engines above rely on the statistical advantage of heroes with and against other heroes. They basically add up each candidate hero’s advantage/disadvantage against each enemy hero.
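Here is a minimal sketch of that additive approach as I understand it (the advantage table and hero names are made up for illustration; this is not the sites’ actual code):

```python
# A minimal sketch of the additive counter-picking approach used by the sites above.
# `advantage` is a hypothetical table: advantage[a][b] = win-rate edge of hero a against hero b.

def suggest(enemy_team, advantage, candidate_pool):
    scores = {}
    for hero in candidate_pool:
        # Sum the hero's statistical edge over every enemy pick.
        scores[hero] = sum(advantage[hero][enemy] for enemy in enemy_team)
    # Recommend the heroes with the largest total advantage.
    return sorted(scores, key=scores.get, reverse=True)

# Example with made-up numbers:
advantage = {
    "Axe":  {"Sven": 0.02, "Luna": -0.01},
    "Lina": {"Sven": 0.01, "Luna": 0.03},
}
print(suggest(["Sven", "Luna"], advantage, ["Axe", "Lina"]))  # ['Lina', 'Axe']
```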

This may not work well for two reasons. The first is that two heroes who are individually weak against a certain lineup can come out on top when they’re picked together, since they may have good synergy or a combo together.

The second is that Dota 2 is a game of possibilities: there are two teams of five players and 113 heroes to choose from, which gives about (113 choose 5)*(108 choose 5)/2 = ~10¹⁶ possible match-ups. Granted, most are very unlikely to ever take place, hence they’re not likely to be an input to our suggestion system. Still, we may want to have some kind of idea about those matches as well. And even for popular match-ups, most of the variations aren’t likely to be in the dataset, especially since the dataset needs to change frequently due to gameplay updates.
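As a quick sanity check of that estimate (the division by two treats the two sides as interchangeable, as in the figure above):

```python
# Back-of-the-envelope count of possible match-ups (Python 3.8+ for math.comb).
from math import comb

matchups = comb(113, 5) * comb(108, 5) // 2   # halve it: sides treated as interchangeable
print(f"{matchups:.2e}")                      # ~7.82e+15, on the order of 10^16
```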

ML Ideas to the Rescue

Solution: Representing heroes with numerous features rather than treating them as atomic units.

Embeddings:

  • Represent similarities between heroes and make substitutions easier
  • Naturally lead to “team vectors”, which inherently helps to model synergies
  • Make adapting to patches and gameplay changes easier
  • Require fewer games to reliably suggest heroes

Let’s expand on these points a bit.

Imagine a situation where you want a hero who fills a specific niche in a game, but that hero is banned from the hero pool. For a standard one-hot implementation to be reliable, the dataset must contain variations of the match-up with that hero substituted by another, preferably a good number of them, to overcome the noise. This presents a huge issue, as the chances of the dataset satisfying this criterion are slim. But when using embeddings, our model can mine information from similar games where a similar hero is picked, so it can approximate the truth better.
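As a toy illustration of that difference (not the final model; the hero names and embedding values below are made up), one-hot vectors treat every pair of heroes as equally unrelated, while embeddings let us ask which hero is the closest substitute:

```python
# A toy contrast between one-hot encoding and embeddings (illustration only).
import numpy as np

NUM_HEROES = 113
hero_ids = {"Sniper": 0, "Drow Ranger": 1, "Axe": 2}

def one_hot(hero):
    v = np.zeros(NUM_HEROES)
    v[hero_ids[hero]] = 1.0
    return v

# One-hot: every pair of distinct heroes looks equally unrelated.
print(one_hot("Sniper") @ one_hot("Drow Ranger"))   # 0.0, same as for any other pair

# Embeddings: after training, similar heroes end up with similar vectors, so we
# can ask which hero is the closest substitute for a banned one.
embedding = np.random.randn(NUM_HEROES, 16)         # stand-in for trained vectors

def closest_substitute(hero):
    v = embedding[hero_ids[hero]]
    sims = embedding @ v / (np.linalg.norm(embedding, axis=1) * np.linalg.norm(v))
    sims[hero_ids[hero]] = -np.inf                  # exclude the hero itself
    return int(np.argmax(sims))                     # id of the most similar hero

print(closest_substitute("Sniper"))
```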

The second point follows closely from the first: the “team vectors” of two similar teams are likely to be close to each other. Plus, by creating a meaningful team vector, similar to the hero vectors, where each dimension represents features such as split-push capability, vulnerability to physical damage, etc. (although most dimensions are superpositions of human-readable features), we make it easier for the following layer in our network to model synergies.
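A minimal sketch of one way to build such a team vector; summing the five hero embeddings is an assumption made here for illustration, not necessarily what the final network will do:

```python
# A "team vector" as the sum of the five hero embeddings (one simple choice).
import numpy as np

embedding = np.random.randn(113, 16)        # stand-in for trained hero embeddings

def team_vector(hero_ids):
    # Summing keeps the representation order-invariant: a draft is a set, not a sequence.
    return embedding[hero_ids].sum(axis=0)

radiant = team_vector([7, 21, 46, 74, 99])  # arbitrary hero ids
dire    = team_vector([3, 12, 35, 58, 101])
match_features = np.concatenate([radiant, dire])   # fed to the layers that follow
```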

About Data

Starting from scratch, we will train our hero embeddings using all professional matches in Dota 2 history. Yes, there are vast differences between the old days of Dota and today, but this will be our starting point, as it saves us the considerable trouble of eliminating sub-optimal pub matches (matches where one or more players feed, play badly, or similar sub-optimal conditions are present; these are fairly common even in higher brackets). Plus, the number of pro matches per season has gone up so much in recent years that recent matches dominate the dataset; it’s almost like an exponentially weighted time average. Also, in the next stage, where we predict match outcomes, we will initialize the embeddings with the vectors we calculate at this stage and continue to train them via backpropagation. So if we wanted to train the network for a new patch, we’d keep the model as is, because the hero embeddings will be able to adapt to the new data.

Retrieving Professional Game Data from OpenDota:

OpenDota is an “Open source Dota 2 data platform”. They provide stats that most Dota 2 stats sites only offer at a premium. We will be using their Data Explorer feature to retrieve the pro match history.

If you visit the Data Explorer, you’ll notice it provides an interface to view and edit the SQL behind each query. Here’s the default SQL code:
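The snippet embedded in the original post doesn’t carry over here, so the block below is only a rough reconstruction of what the default Explorer query looks like, kept as a Python string since we’ll be sending SQL through the API later; the column names are inferred from the discussion that follows, and the live default may differ:

```python
# Rough reconstruction of the Explorer's default query (exact columns, joins,
# and the LIMIT value on the live site may differ).
DEFAULT_SQL = """
SELECT
    matches.match_id,
    matches.start_time,
    matches.radiant_win,
    leagues.name leaguename,
    player_matches.account_id,
    player_matches.player_slot,
    player_matches.hero_id
FROM matches
JOIN leagues USING (leagueid)
JOIN player_matches USING (match_id)
ORDER BY matches.match_id DESC
LIMIT 100
"""
```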

We will request the information in a couple of batches. But first, we increase limit, which is the number of rows returned (10 times the number of matches, since each match contributes one row per player). Then we order by player_slot secondarily to match_id. This ensures players are returned in a fixed order such that the first 5 rows of each match are always the radiant players. We can also get rid of leaguename and account_id. Finally, we add a start_time constraint: at the very beginning of the job it’s the current Unix timestamp, and after each batch we set it to the oldest start time in the batch we just fetched.

(You can probably get away with pulling everything in one batch; there are only ~48K pro matches in total.)

Here’s the SQL we’re querying:
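The embedded query doesn’t survive here either, so below is a sketch of the modified query with the changes described above applied, kept as a Python format string so the start_time boundary can be plugged in before each request; the LIMIT of 20000 (2,000 matches per batch) is an arbitrary choice of mine:

```python
# Sketch of the modified query: radiant_win kept, leaguename/account_id dropped,
# rows ordered by match_id then player_slot (radiant slots come first),
# and a start_time cutoff added. {boundary} is filled in before each request.
BATCH_SQL = """
SELECT
    matches.match_id,
    matches.start_time,
    matches.radiant_win,
    player_matches.hero_id
FROM matches
JOIN player_matches USING (match_id)
WHERE matches.start_time <= {boundary}
ORDER BY matches.match_id DESC, player_matches.player_slot ASC
LIMIT 20000
"""
```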

We initialize the start_time boundary to the current Unix timestamp (so the constraint start_time <= boundary initially matches everything up to now), and each time we want to fetch another batch, we update it to the start time of the oldest match we got from the previous batch. Before we go any further, here are a couple of Python lines using the requests library to connect to a given URL; we will use the returned object to retrieve and parse the information:
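A minimal sketch of those lines (the error check is my addition):

```python
# Connect to a given URL and parse the JSON body returned by OpenDota.
import requests

def fetch_json(url):
    response = requests.get(url)
    response.raise_for_status()   # fail loudly on HTTP errors
    return response.json()
```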

OK, now we write a function to retrieve match data and put it in a format such that every match is represented by a tuple:

match tuple: (match_id, start_time, radiant_win, [heroes])
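Here is a sketch of such a function, assuming the Explorer response keeps its result rows under a “rows” key and that every match arrives as 10 rows in the order our ORDER BY clause enforces (radiant slots first):

```python
# Sketch: group the 10 player rows of each match into one tuple
# (match_id, start_time, radiant_win, [10 hero ids, radiant first]).
from collections import OrderedDict

def rows_to_matches(response_json):
    grouped = OrderedDict()                       # preserve newest-first order
    for row in response_json["rows"]:
        grouped.setdefault(row["match_id"], []).append(row)

    matches = []
    for match_id, match_rows in grouped.items():
        if len(match_rows) != 10:                 # skip malformed matches
            continue
        heroes = [r["hero_id"] for r in match_rows]   # radiant first, thanks to ORDER BY
        matches.append((match_id,
                        match_rows[0]["start_time"],
                        match_rows[0]["radiant_win"],
                        heroes))
    return matches
```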

At this point, I could have URL-encoded the SQL programmatically to build the GET request. Instead, I copied a sample query URL from the Explorer page into a string, for simplicity.

Finally, we pull the matches from the OpenDota API and save them locally:
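A sketch of the batching loop, stitching together the pieces sketched above (BATCH_SQL, fetch_json, rows_to_matches); the sleep interval and output filename are my own choices, and unlike the original post I URL-encode the SQL instead of copying a ready-made query URL:

```python
# Sketch of the main loop: fetch batches going back in time, then pickle the result.
import pickle
import time
from urllib.parse import quote

EXPLORER = "https://api.opendota.com/api/explorer?sql="

all_matches = []
boundary = int(time.time())                 # start from "now" and walk backwards

while True:
    sql = BATCH_SQL.format(boundary=boundary)
    batch = rows_to_matches(fetch_json(EXPLORER + quote(sql)))
    if not batch:
        break                               # no older pro matches left
    all_matches.extend(batch)
    # Oldest start_time in this batch; subtracting 1 may skip matches sharing that
    # exact timestamp, so dedupe by match_id if you want to be thorough.
    boundary = min(m[1] for m in batch) - 1
    time.sleep(1)                           # be polite to the API

with open("pro_matches.pkl", "wb") as f:
    pickle.dump(all_matches, f)
```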

This concludes the first part of this series. In the next part, we’ll actually go about training hero embeddings.
