LoL: The Results Are In - Champion Tier List Performance for Summer 2022

How Solo Queue data was used to consistently predict Champions’ performance in professional play

Jack J
The Esports Analyst Club by iTero Gaming
5 min read · Aug 25, 2022


Here is the timeline of the Champion Tier List Challenge story so far:

My firm advice is that you read at least the article introducing the challenge before going any further, but I’m not your mother.

Here’s the TLDR for those who don’t like being told what to do:

  • The challenge is for me to use solo queue data to create a Champion tier list for pro play. The tier list is released just before the first professional game is played on that patch.
  • After every patch, I check how each Champion performed and average their results for each tier.
  • I would win the challenge (and prove the point) if, across the whole regular season, the Champions the model assigned “S-tier” outperformed the Champions assigned “A-tier”, “A-tier” outperformed “B-tier” and so on…

p.s. if you skipped the previous article you may be wondering what counts as “pro play”; it’s every game in the 2022 Summer Regular Season (no play-offs) from LCK, LPL, LEC and LCS.

Today, that challenge comes to an end (it actually ended last week after the final game of regular season but I was too busy launching the iTero Drafting Coach to write this article).

To those new to the challenge, here’s an example of what an individual patch’s results look like (12.13):

Champion Tier List Results for Patch 12.13

Each tier contains the Champions that our solo queue model assigned there BEFORE the patch was played professionally, and then shows how they performed on stage.

For instance, in the “DON’T” tier we have Aatrox, who played 4 games and won 25% of them. In total, the Champions in this tier were played 81 times for a total win rate of 44.4% and a Multiplier of 0.88x (this value is explained in detail in the previous article). Compare that to the “S-tier” where the Champions within it won 55.1% of their games with a 1.07x multiplier.
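If you want to see the mechanics, here’s a rough sketch of how such a per-patch tally could be computed from per-Champion records. The records below are illustrative placeholders, and the Multiplier column follows its own definition from the previous article, so it isn’t reproduced here:

```python
# A minimal sketch of tallying one patch's results per tier.
# The per-champion records are illustrative placeholders only.
from collections import defaultdict

# (champion, tier assigned before the patch, pro games played, pro games won)
records = [
    ("Aatrox", "DON'T", 4, 1),    # 25.0% win rate, as in the 12.13 example
    ("Yuumi",  "C",     24, 13),  # 54.2% win rate
    # ... one row per Champion picked on the patch
]

games = defaultdict(int)
wins = defaultdict(int)
for champ, tier, played, won in records:
    games[tier] += played
    wins[tier] += won

for tier in games:
    print(f"{tier}: {games[tier]} games, {wins[tier] / games[tier]:.1%} win rate")
```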

Now, this is just one patch, and that doesn’t give us enough data. So, without further ado, here is the final accumulated result for all the tier lists across the Summer 2022 regular season:

Final Results of the Tier List Challenge from the 2022 Summer Regular Season (LCK, LCS, LEC, LPL)

As you can see, on average across the season, every single tier averaged a better result than the one below it.

Challenge = Complete.

Over the 3,411 Champion games predicted, it would be INCREDIBLY statistically unlikely that I simply lucked into it. And so, we must accept that it is indeed possible to use solo queue data to make predictions about professional play.
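If you wanted to sanity-check that luck claim yourself, one simple option is a permutation test: shuffle the tier labels across the champion-games and count how often the shuffled tiers still come out perfectly ordered by win rate. The data layout and tier ordering below are assumptions for illustration, not necessarily the format iTero actually uses:

```python
# A rough permutation-test sketch: how often would randomly re-assigned tiers
# produce a perfect top-to-bottom ordering of win rates by chance?
import random

# Assumed tier ordering, based on the tiers mentioned in the article.
TIER_ORDER = ["S", "A", "B", "C", "D", "DON'T"]

def tier_win_rates(games):
    """games: list of (tier, won) pairs, one per champion-game; won is 0 or 1."""
    wins = {t: 0 for t in TIER_ORDER}
    counts = {t: 0 for t in TIER_ORDER}
    for tier, won in games:
        counts[tier] += 1
        wins[tier] += won
    return [wins[t] / counts[t] for t in TIER_ORDER if counts[t]]

def perfectly_ordered(games):
    rates = tier_win_rates(games)
    return all(a > b for a, b in zip(rates, rates[1:]))

def permutation_p_value(games, n_shuffles=10_000):
    """Fraction of label shuffles that still yield a perfect ordering."""
    tiers = [t for t, _ in games]
    outcomes = [w for _, w in games]
    hits = 0
    for _ in range(n_shuffles):
        random.shuffle(tiers)
        if perfectly_ordered(list(zip(tiers, outcomes))):
            hits += 1
    return hits / n_shuffles
```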

However, nothing is ever perfect. Yes, it’s great that “S-tier” outperformed “D-tier”, but the margins are pretty small. 53.1% to 47.6%. There isn’t much in it.

There were a handful of Champions that were played a lot and were consistently mis-tiered. Look at Yuumi in Patch 12.13: “C-tier” with a 54.2% win rate over 24 games. She was consistently tiered as C/D, and consistently won games. How about Lissandra? 44.4% over 18 games, yet we put her in “S-tier”.

Each patch, I reviewed the results, and there was a consistent reason why these Champions were missing the mark:

How the Champions fit the pro meta

This really is the fundamental difference between solo queue and professional play: the metas are often vastly different. That means that all the signs from our solo queue analysis could point to a certain Champion being dreadful, yet because they fit into the professional meta they end up overperforming.

Let’s take a simple example: Yuumi works extremely well with certain Champions. In solo queue, it’s unlikely that your teammates will pick their Champions to optimise the synergy with you just because you picked the kitten; they’ll just pick whatever Champions they always play. In professional play, coaches and players will purposefully choose Champion combinations that they know overperform together.

Let’s imagine Yuumi has a 30% win rate in solo queue, but it jumps to 70% when she’s paired with Ezreal, and Ezreal is only picked in 10% of her solo queue games. Professional players who are aware of this pairing will pick Ezreal in 80% of Yuumi’s games. Our model will assume Yuumi is in a bad place, but she’s actually very strong as long as she’s paired with the right Champion!
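To make that arithmetic concrete, here’s a quick sketch with those made-up numbers (it assumes the with- and without-Ezreal win rates carry over from solo queue to pro, which is a simplification):

```python
# Rough arithmetic behind the Yuumi example above (illustrative numbers only).
p_ez_solo, p_ez_pro = 0.10, 0.80  # share of Yuumi games with Ezreal on her team
wr_with_ez = 0.70                 # Yuumi win rate when paired with Ezreal
wr_solo_overall = 0.30            # what the solo queue model sees

# Implied win rate in the 90% of solo queue games without Ezreal:
wr_without_ez = (wr_solo_overall - p_ez_solo * wr_with_ez) / (1 - p_ez_solo)

# Expected pro win rate if pros pair her with Ezreal 80% of the time:
wr_pro = p_ez_pro * wr_with_ez + (1 - p_ez_pro) * wr_without_ez
print(f"{wr_without_ez:.1%} without Ezreal -> ~{wr_pro:.1%} expected in pro")
# ~25.6% without Ezreal -> ~61.1% expected in pro
```

Under those assumptions, the same Champion our model scores at 30% would be expected to win roughly 61% of her pro games.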

This happened a lot. And it wasn’t just pairings. Maybe certain Junglers were bad in solo queue but very good at countering the pro-meta ADCs (who aren’t picked as often in solo queue). Maybe there was a top lane Champion that looked busted in solo queue but there was actually a very strong answer to it in pro.

This lesson has been learnt, and although I’m finished with these tier lists for this year, we’ll come back for the Spring Split more sophisticated and capable than ever!

For now, I hope that we can finally agree that although it is imperfect, solo queue data can be used to make assumptions about professional play.

You got to the end of the article! My name is Jack J and I’m a professional Data Scientist applying AI to competitive gaming & esports. I’m the founder of AI in Esports start-up iTero.GG, recently having launched the iTero Drafting Coach. You can follow me on Twitter, join the iTero Discord or drop me an e-mail at jack@itero.gg. See you at the next one.
