Predicting the Winner of League of Legends Games

Timothy Carroll
Published in Analytics Vidhya · Aug 26, 2020 · 5 min read

League of Legends (LoL) is a massively popular multiplayer online battle arena (MOBA) video game developed by Riot Games. In LoL, players take control of unique “champions” with different abilities and traits and fight against another team of five champions (5v5), along with minions and monsters. The goal of LoL is to destroy the enemy’s “Nexus”, a structure nestled deep within the enemy base and guarded by towers and minions. Champions start off weak at level one and gain experience and gold by killing minions and enemy champions. This increases their level and allows them to buy items that further increase their strength.

I myself am a very bad League of Legends player, so I set out to see if, through machine learning, I could predict which team would win. With this info, I’d be able to better understand what contributes to a victorious team. Additionally, I could determine when a game was no longer statistically winnable and I was better off voting to forfeit.

The Process:

I obtained a dataset from Kaggle containing stats from the first 10 minutes of close to 10,000 highly ranked (Diamond I to Master) games. It is important to note that in higher ranks, the first 10 minutes of a game are highly indicative of who will win; games tend to run longer on average at lower Elos (Elo is the ranking system).

The dataset contains 38 features pertaining to both the red and the blue team. Some of these features include the teams’ kills, gold, and experience.

Prior to building my model, I first established my target vector: whether blue wins or not. This is a classification problem, since a team can only win or lose. I then started with exploratory data analysis and some wrangling of my dataframe. In this process, and with the help of some prior LoL knowledge, I realized that many of the features are collinear, which can cause problems with the model. After removing the problem columns, I also dropped features with little correlation to my target vector. Pictured here is my correlation matrix before dropping.

Correlation Matrix of Features
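In code, this wrangling step might look roughly like the following sketch. The file name comes from the public Kaggle dataset, and the dropped columns are only illustrative; your own correlation matrix should drive what you remove.

```python
import pandas as pd

# Load the Kaggle dataset (file name may differ for your copy)
df = pd.read_csv("high_diamond_ranked_10min.csv")

# Pairwise correlations reveal collinear features: red-side stats mirror
# blue-side ones, and per-minute columns are linear transforms of totals.
corr = df.corr()

# Illustrative examples of redundant columns; drop whatever your matrix flags
redundant = ["redGoldDiff", "redExperienceDiff", "blueGoldPerMin", "blueCSPerMin"]
df = df.drop(columns=[c for c in redundant if c in df.columns])
```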

My next step was to split my dataframe into training, validation, and testing sets. This gives my model data to learn from (training), data to validate and tune with (validation), and finally held-out data to test the final model on (testing). I also stratified my data when splitting, to avoid a skew toward too many wins or losses in the training set. Using my split data, I was able to get a baseline score of 0.501. This score is important as it establishes the probability of correctly guessing whether a team won or lost if we were to simply guess the majority class. We now have a starting point to build on.
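A minimal sketch of the split and baseline, assuming a blueWins target column (the variable names here are mine, not necessarily the article’s):

```python
from sklearn.model_selection import train_test_split

X = df.drop(columns=["blueWins"])
y = df["blueWins"]  # 1 if the blue team won, 0 otherwise

# Hold out a test set first, then carve a validation set from the rest.
# stratify keeps the win/loss ratio consistent across all three sets.
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.25, stratify=y_trainval, random_state=42)

# Majority-class baseline: always guess the most common outcome
baseline = y_train.value_counts(normalize=True).max()
print(f"baseline accuracy: {baseline:.3f}")  # close to 0.501 on this data
```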

I then created multiple models, starting with simple versions of different types: DecisionTrees, RandomForests, and XGBoost (a gradient booster). After creating simple versions of the models and testing them on my validation data, I used Randomized and Grid Searches to improve them. These techniques search for the best set of hyperparameters for a model. After tuning, I was able to obtain validation scores of 0.7258 and 0.726 for RandomForest and XGBoost, respectively. I decided on XGBoost as my final model and tested it on my held-out test set, getting a score of 0.7314. Not bad!
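As an example of the tuning step, here is a randomized search over an XGBoost classifier. The parameter ranges are illustrative, not the ones the article actually searched:

```python
from scipy.stats import randint, uniform
from sklearn.model_selection import RandomizedSearchCV
from xgboost import XGBClassifier

# Illustrative search space; widen or narrow as your compute budget allows
param_dist = {
    "n_estimators": randint(100, 500),
    "max_depth": randint(2, 8),
    "learning_rate": uniform(0.01, 0.3),
    "subsample": uniform(0.6, 0.4),
}

search = RandomizedSearchCV(
    XGBClassifier(eval_metric="logloss"),
    param_distributions=param_dist,
    n_iter=25,
    scoring="accuracy",
    cv=5,
    random_state=42,
)
search.fit(X_train, y_train)

best_xgb = search.best_estimator_
print("validation accuracy:", best_xgb.score(X_val, y_val))
```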

I then wanted to better understand my models and how they functioned. First, I wanted to visualize the difference between my tuned and untuned models, so I graphed the ROC curves of both. This plot illustrates performance across discrimination thresholds by plotting the true positive rate against the false positive rate.

ROC curve of untuned (blue) vs tuned (red) XGBoost models.
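A plot like this takes only a few lines with scikit-learn. In this sketch, untuned_xgb stands in for a default XGBClassifier fit on the same training data; the name is my own placeholder:

```python
import matplotlib.pyplot as plt
from sklearn.metrics import RocCurveDisplay

# Overlay both models' ROC curves on a single set of axes
ax = plt.gca()
RocCurveDisplay.from_estimator(untuned_xgb, X_val, y_val, ax=ax, name="untuned")
RocCurveDisplay.from_estimator(best_xgb, X_val, y_val, ax=ax, name="tuned")
ax.plot([0, 1], [0, 1], "k--", label="chance")
ax.legend()
plt.show()
```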

The tuned model generally had better true positive rates at the same false positive rates as the untuned model, but at certain points they perform quite similarly. Linked is the confusion matrix showing the final model’s predictions on the validation data.
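One way to generate such a matrix is scikit-learn’s display helper (the article may have plotted it differently):

```python
import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay

# Rows are true labels, columns are predicted labels
ConfusionMatrixDisplay.from_estimator(
    best_xgb, X_val, y_val, display_labels=["blue loss", "blue win"])
plt.show()
```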

So which features most heavily contributed to my final model’s predictions? To find out, I first generated a Shapley plot. Shapley plots are great for explaining the contribution of each feature to a single prediction. I chose the 7188th match in my training set to demonstrate this. In this match, the blue team ended up winning, and 10 minutes in they had 11 kills and 3 deaths. The blue team also had the lead in gold, experience, and CS (minions killed).

Shapley plot for the 7188th match in my data

As seen here, the gold and experience features push toward a blue win; they are the most important in making this particular prediction. However, red has a dragon kill where blue does not, and killing dragons grants bonuses to the team that secures them. This is why the “reddragons” feature points to the left: it reduces the model’s confidence in a blue win. It is, however, much less important than the gold and experience features.
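A force plot like the one above can be produced with the shap library. A rough sketch, reusing the tuned model from earlier (row 7188 matches the article’s example, though your row indices will depend on how you split the data):

```python
import shap

# TreeExplainer handles gradient-boosted tree models like XGBoost directly
explainer = shap.TreeExplainer(best_xgb)
shap_values = explainer.shap_values(X_train)

# Force plot for a single match: by shap's default convention, red bars
# push the prediction toward a blue win, blue bars push against it
shap.force_plot(explainer.expected_value,
                shap_values[7188],
                X_train.iloc[7188],
                matplotlib=True)
```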

Partial Dependence Plots for blue experience and blue gold differences

These plots show that the greater the difference in gold or experience in blue’s favor, the more strongly the model predicts a blue win. The effect does plateau around 2500, though, mostly because it’s near impossible to have a larger spread than that 10 minutes into a game. These features can also be displayed in a matrix together.

This shows us that a high gold and experience difference together heavily affect the outcome of our prediction. This makes sense, as gaining gold and experience are both very important, and highly correlated, in LoL.
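scikit-learn’s PartialDependenceDisplay can reproduce both the one-way plots and the two-way matrix (the article’s plots may come from another library such as pdpbox; the feature names below are from the Kaggle dataset):

```python
import matplotlib.pyplot as plt
from sklearn.inspection import PartialDependenceDisplay

# Two one-way PDPs plus a two-way interaction plot of the same pair
PartialDependenceDisplay.from_estimator(
    best_xgb,
    X_val,
    features=["blueExperienceDiff", "blueGoldDiff",
              ("blueExperienceDiff", "blueGoldDiff")],
)
plt.show()
```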

The last visualization I made was the permutation importance of features for my final model. This technique shuffles a feature’s values and measures the resulting loss in score to determine how much the model depends on that feature to make predictions.

Permutation importance for the final XGBoost model

As you can see, the difference in gold between the two teams was by far the largest deciding factor. This isn’t surprising, as having high amounts of gold enables players to buy items, making their champions stronger.
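Permutation importance is built into scikit-learn; a minimal sketch using the validation set:

```python
from sklearn.inspection import permutation_importance

result = permutation_importance(best_xgb, X_val, y_val,
                                n_repeats=10, random_state=42)

# Rank features by the mean accuracy lost when their values are shuffled
for i in result.importances_mean.argsort()[::-1]:
    print(f"{X_val.columns[i]:<25} {result.importances_mean[i]:.4f}")
```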

After going through the model and studying the importance of the features, I can confidently say that securing gold is the most important aspect of winning games of LoL at this skill level. So focus on your CS, and watch for ganks!

Sources:

Data: Kaggle

Model and Visualizations: GitHub

My Website: Here
