The LumiRank Algorithm, and How We Got Here

Stuart98
22 min read · Jan 22, 2024


When Medium needs you to put an image into the article to use as the thumbnail and it’s 7:30 am and you haven’t slept in like 14 hours or something and you’re desperate

Introduction

The LumiRank algorithm is an iterative averaging-based algorithm that assigns a score for each event based on a player’s wins, losses, and outplacements, then averages those scores weighted by factors including the size of the tournament, how the performance at the event compares to the player’s other events, and their overall attendance throughout the season. A full explanation of the algorithm’s features and design decisions follows, though it is useful to first go over the history of algorithms in Smash, issues that have arisen in the past, and why we decided to go with an algorithm to begin with.

Part 1: Why an algorithm instead of a panel?

In short, because the way the Smash Ultimate scene is structured is far better suited to an algorithm than a panel.

Panel rankings have been in use for global rankings in the Smash Scene for a decade at this point. Notable examples include:

  • SSBMRank (2013-Present)
  • SSBBRank (2014)
  • PGRU Contenders (2021) and the PGRUv3 (2022)

A panel ranking called the X-Factor was also produced alongside the earlier algorithmic PGR rankings from 2016 through the end of 2019, though it was not the primary ranking and was instead used to compare players to their community perception.

All of the above panel rankings suffered from most of the following issues at some point, though as will be discussed later the modern SSBMRank appears to be largely free of them:

  • Panelists considering extra-bracket information such as games in friendlies in evaluating players
  • Panelists failing to look at tournament results when evaluating players, or only considering results from memory.
  • Players who were popular due to playing an obscure character or being from a distant region being ranked higher than other players with similar results.
  • Players whose best bracket runs were on-stream being ranked over players with similar results whose runs were off-stream.
  • Players who had a legacy of putting up better results than they did in the ranking season being ranked above players who had similar results during the ranking season.
  • Players who had strong results just before the ranking period or at tournaments that did not qualify for the ranking being ranked above players with similar results at tournaments that qualified for the ranking.
  • Players who achieved a high placement at a major with few strong wins due to a combination of DQs and upsets being ranked based on the placement number alone without considering the context of their bracket run.
  • Players with similar results being evaluated differently based on what region their results were in.

The most recent attempt at using a panel for a post-Melee Smash game, 2022’s PGRUv3, notably suffered from all of these issues, only partially sidestepping region-balancing problems thanks to segregating rankings by region. Largely as a result of this, the PGRUv3 panelists voted shortly after the dissolution of PGStats to make future rankings algorithmic rather than panel-based.

But the SSBMRank has largely worked past the issues plaguing past panel rankings, so couldn’t LumiRank?

While it is true that there are things SSBMRank is doing that past Ultimate panels haven’t done but could, such as the use of a data sheet and lists of notable qualified events, as well as having a ranking period longer than three and a half months, the way Melee is structured is also substantially friendlier to a panel (and more hostile to an algorithm) than Ultimate is. 90% of Melee’s top 100 players are in North America, meaning panelists don’t have to compare players whose head to heads are against entirely non-overlapping sets of players. By comparison, North America and Japan each consistently account for 40–55% of the share of players in Ultimate. Balancing a panel region-wise in such circumstances is so difficult as to potentially be nigh-impossible; past panel rankings in post-Melee games faced with this difficulty have taken measures to restrict eligibility, favoring one region over another, as SSBBRank did by requiring players to enter a notable NA event to be eligible.

Melee also has far fewer notable tournaments than Ultimate does, with only 60 offline events with 100 or more entrants in 2023, including locals, while Ultimate had 409 such events among ranked tournaments alone. This paucity of data not only makes it substantially harder to run an algorithm on Melee, but also makes it much easier for a panelist to parse the data without being overwhelmed.

Ultimate’s status as a scene whose strongest regions are split across continents and that has over a thousand relevant tournaments a year means processing all this data is far easier for an algorithm than a panel.

Part 2: A brief history of Algorithms in Smash

Algorithms have been used for global Smash rankings for years, both as fun hobby projects and for generally accepted community rankings.

A brief note on non-smash ranking systems

Super Smash Bros. Ultimate and the wider Smash scene in general do not exist in a vacuum, and it has often been suggested that rating systems used in other competitive games be applied to Smash, with the two most common being ELO and Tennis-Style rankings. Experience shows, however, that both of these systems have significant flaws that prevent them from being suitable for use as Smash Ultimate’s main ranking system.

ELO is probably the most common system used in hobbyist ranks, with a quick search of /r/smashbros revealing dozens of examples. ELO works exclusively off of head to heads between players, assigning a formulaic win probability for each match and adding and removing equal numbers of points from each player based on the result. Ambisinister of MeleeStats went over many of the problems with using ELO for Smash rankings years ago. In summary, ELO and systems based on it generally favor players with higher set counts, meaning that they overrate isolated regions with a very high event volume and overreward bracket runs made in losers rather than winners. Though Ether’s recent experimental ELO-based EtherRank didn’t display these issues to the extent of other past ELO projects, they were still very much present. Additionally, the time-linearity of ELO means that one cannot run an ELO system based solely on the data from a single segmented smash season (or at least, not and expect the results to make sense); ELO has to use ratings from the previous season as initial conditions for the new one.
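For reference, the core ELO update that these hobbyist systems build on looks roughly like the sketch below. The K-factor and the zero-sum transfer are the standard textbook formulation; the specific constants are not taken from any particular smash ELO project.

```python
def elo_expected(rating_a: float, rating_b: float) -> float:
    """Probability that player A beats player B under the ELO model."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def elo_update(rating_a: float, rating_b: float, a_won: bool, k: float = 32.0):
    """Zero-sum update: the winner gains exactly the points the loser drops."""
    expected_a = elo_expected(rating_a, rating_b)
    score_a = 1.0 if a_won else 0.0
    delta = k * (score_a - expected_a)
    return rating_a + delta, rating_b - delta
```

Because every set moves points and each update depends on the ratings at the time it was played, a player’s final rating is path-dependent on how many sets they play and when, which is where the volume bias and time-linearity problems described above come from.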

Tennis-style rankings are much the opposite of ELO and perhaps the simplest of all ranking systems: how a placement was achieved is irrelevant, only what you placed and what the value of the tournament was, with a player’s N most valuable placements over the course of a season used to determine the final standings. PracticalTAS (who would go on to make the PGR algorithm and run panel rankings for the PGR until PGStats’ dissolution in 2022) ran experimental tennis-style rankings for Smash 4 and Melee in 2016. Somewhat more recently, a tennis-style ranking was created for Smash Ultimate in 2019 (though I forget by whom, and the document doesn’t give any clues as to its creator). Both of these experiments had issues with overrewarding peaks over consistency, and the latter has clear issues in handling isolated regions. Even more importantly, tennis-style rankings necessarily ignore head to heads, which matter a lot in a game like Smash Ultimate, where a bracket run to 17th can often be more impressive than a run at the same event to 7th. Tennis-style rankings are suitable for use in purposes like circuits, where simplicity and ease of understanding are paramount, but are unsuitable for use in player rankings.
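A generic tennis-style season standing fits in a couple of lines. This is a schematic sketch rather than the exact formula either of the experiments above used; the placement-point values and the choice of N are placeholders.

```python
def tennis_score(placement_points: list[float], n_best: int = 8) -> float:
    """Sum a player's N most valuable placements of the season.
    placement_points: hypothetical per-event points, already scaled by event value.
    Everything outside the top N is ignored, which is the source of the
    peaks-over-consistency problem described above."""
    return sum(sorted(placement_points, reverse=True)[:n_best])
```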

The Panda Global Rankings (2016–2020)

The first iteration of the Smash 4 Panda Global Rankings was released in mid-2016 and covered the time period from January 2015 through May 2016, with subsequent iterations through the end of 2019 each covering a roughly six month span of time, with the exception of the end of 2018 “PGR 100” covering January 2016 through November 2018. What we know of the methodology of the PGRv1 and PGRv2 is extremely vague to the point where it’s unclear the rankings were even algorithmic. PracticalTAS was brought on to design a new algorithm for the PGRv3, though the algorithm underwent substantial changes between the release of the PGRv3 and PGRv4 that meant it behaved dramatically differently in v4 onwards than in the initial season of its use. As such, this section will focus mainly on how the algorithm worked for the PGRv4, PGRv5, Spring 2019 PGRU, and Fall 2019 PGRU. Back in August I talked about the deficiencies of earlier PGR releases (as well as other rankings) on Tuesday Morning Mythra, especially with regard to Japan, so give that a read if you’re interested.

PracticalTAS was going to release a full design document back in January 2020, but in the course of writing that document decided to switch to a panel ranking moving forward, negating the need for the document. As such, no comprehensive description of its inner workings was ever made. Even so, we have a rough understanding of how the algorithm worked based on various statements made on Twitter, in the PGStats Discord (now the Ultimate Stats Discord), and in PGR FAQs. An initial list of qualified players was taken from tiered events; in some seasons this list was based on placements, while in the fall 2019 season it was based on total overall set wins. These players would be given points against other qualified players based not only on their overall set records, but also on each time they outplaced other players, and by how much. Iteration would be used to determine the relative strength of each player and then generate their score. This means that a player’s strength would be considered the same at all points during a season, and any outside data (such as their ranking prior to the season) was irrelevant. The PGRv5 and onward introduced a measure called confidence, which penalized players with a high rate of losses to unqualified players.

Strengths:

  • Iteration means wins on players whose results greatly went up during a season are properly rewarded. Mistake on the PGRv4, Zackray on the Spring 2019 PGRU, and Maister on the Fall 2019 PGRU are all good examples of this.
  • Use of outplacements rather than raw placements means that numerically high placements at top-heavy events aren’t overrewarded (for example, at major Thunder Smash 3, where every top 100 player placed in the top 16, a 17th placement wouldn’t be worth much).
  • Losers bracket runs were neither substantially overrewarded nor underrewarded compared to winners bracket runs.

Problems:

  • Although less biased towards attendance than prior rankings, there were still examples of players being placed low due to attending less, such as Choco being ranked outside the top 25 on the Fall 2019 PGRU despite strong placements and head to heads.
  • Regional balance issues heavily tied to event volume, with Japan underrepresented until the Fall 2019 iteration and Europe consistently underrepresented for the duration of the algorithm’s use.
  • Overrewarding of empty runs to good placements at stacked events. This problem was reduced but far from eliminated on the Fall 2019 iteration.
  • Overly stringent requirements for event qualification meant many impactful events were excluded. As the smashdata revolution only occurred in the twilight of the algorithm’s use, it is likely this would have been resolved had the algorithm continued past the Fall 2019 season.

OrionRank (2016–2022)

OrionRank was started by BarnardsLoop and EazyFreezie in 2016 with the goal of representing a more holistic view of the smash scene, using a much larger array of tournaments than the PGR and ranking twice as many players per release. OrionRank’s methodology was never properly publicly described, but I was provided a summary during the course of the development of the LumiRank algorithm. Player value was determined based on an average of each qualified player’s weighted placements for the year, then scaled from 0–100. Players were given points for wins on qualified players equivalent to the defeated player’s value multiplied by the weight of the event, with points deducted for losses to less valuable players based on the difference in value between the loser and the winner multiplied by event weight. Placement points were also used but were worth much less than wins, especially at smaller events. Multipliers were used to increase point totals for players with low attendance and increase loss penalties for players with very high attendance.

Strengths:

  • Player worth being based on an average of placements rather than an additive formula avoids undervaluing of wins against low attendance players.
  • Very inclusive list of events used means missed notable events were rare.
  • Consistently very favorable towards more isolated regions with fewer opportunities to travel.

Problems:

  • Often went too far in the other direction, overvaluing performances in weaker regions and undervaluing major performances. The final version of the ranking improved on this issue but didn’t eliminate it, for example ranking Jdizzle 100th off of in-region dominance despite a 0–10 record against the top 99.
  • Went too far in punishing players for inconsistency, for example ranking Goblin higher in Spring 2022 than in 2021 due to more inconsistent placements in 2021 despite having far better head to heads that year. This is also likely responsible for Japan being consistently underrated throughout OrionRank’s lifespan.
  • Placements as the sole factor for determining player value comes with the expected problems in valuing inconsistent players and those with empty bracket runs.
  • Despite inclusion of factors meant to mitigate the problem, still overly favorable to players with very high attendance.

ΩRank (2020–2022, with retroranks)

I started development on ΩRank in early 2020 following the announcement of the discontinuation of the PGR algorithm, with the initial goal of emulating that algorithm’s features, though I gradually added many wholly original innovations. I went in depth on that algorithm’s workings in a mid-2022 methodology post and further described the final spate of changes I made to it in the introduction post to its abortive full year 2022 release.

In summary: Like the PGR, player value was determined through iteration and was equivalent to each player’s overall score. That score was determined based on each player’s wins, losses, and who they outplaced and were outplaced by. Outplacements for a given tournament were scaled based on the peak wins a player obtained there, such that a worse placement with a strong win was more valuable than a better placement obtained through an easy bracket, eg through upsets and DQs. These sources of points were scaled based on event value, with wins being scaled to a lesser degree and losses to a greater degree, so as to incentivize regional attendance. Overall multipliers were used to reward lower attendance players based on consistently strong wins at each event and/or consistently good losses. Event value was determined entirely through retroactive retiering, using the iterated scores of the players in attendance.

Strengths:

  • Largely maintains strengths of the PGR algorithm.
  • Empty placements are handled appropriately.
  • Generally fair towards lower attendance players from established regions.
  • Avoids punishing players from less established regions for weak regionals if their major performances are strong.

Problems:

  • Over-punishes highly inconsistent players with very strong peaks (eg Takera, Toast, yonni, FutaKiwa in 2022)
  • Loss curve is too harsh for losses dramatically below a player, meaning bombing once can lower one’s score by more than is reasonable.
  • Over-rewards high attendance players who rarely bomb events but rarely beat players above them (eg Dabuz in full year 2022, Mr.R in mid-year 2022). This is a more specific problem than over-rewarding attendance in general, I think an apt name for it might be “existence-posting”: you’re not bombing, you’re not doing particularly well, you’re just going to things and existing at them.
  • Over-rewards single good wins, especially at smaller events (eg Ned, Mezcaul 2022)
  • Highly sensitive to event volume, with minor data disparities between regions having big impacts. See the decline in Japanese representation in full year 2022 compared to mid-year, and the need to use multipliers to achieve adequate Japanese representation in 2021 and the first half of 2019.
  • Overly harsh towards players from isolated regions with low event volume (eg ShinyMark not being top 150 in 2022)
  • Retroactive retiering makes it somewhat opaque to players what events are going to be valued by the algorithm before it happens.

EchoRank (2019–2024)

kenniky’s algorithm, EchoRank, was created in 2019. Unlike the other algorithms discussed here, EchoRank is experimental by nature, taking its many quirks as an acceptable part of the bargain for the benefits of its most important attribute: attendance agnosticism. After a player attends the small number of events required to be eligible for ranking, additional events will improve their rank only if those events improve their average. It’s not the only attendance-agnostic algorithm, but it’s the most prominent one by a good margin, and other attendance-agnostic algorithms tend to have fairly similar quirks, so EchoRank makes the most sense to discuss. Also unlike the other algorithms discussed so far, EchoRank is time-linear: how much a win on a player is worth varies throughout a ranking period, which comes with both benefits and difficulties. EchoRank generates a score for each tournament a player attends and takes a weighted average of all these events, with factors such as tournament tier used to determine the weight. A full explanation of EchoRank’s methodology can be found on its ranking spreadsheet.

Strengths:

  • Complete immunity to the “existence-posting” I described in the previous section, and to problems caused by high attendance players in general.
  • Relatively resistant to problems caused by regional data disparities.

Problems:

  • Highly sensitive to initial conditions; see the overvaluing of Georgia throughout early Ultimate, the undervaluing of Japan prior to 2023, or how quickly the problem flipped in 2023 to the point where Japanese events that other tiering systems classify as super-regionals are now classified by EchoRank as majors.
  • Overly punishing towards volatile players, and more generally volatile regions.
  • Can disincentivize regional attendance by lowering top players’ scores for winning events where they had no strong competition.

Part 3: The LumiRank Algorithm

Conceptual Development

The previously discussed algorithms all had their respective strengths and weaknesses. In mid-year 2022 an experiment called “RankRank” was created, which helped smooth out some of the odd edges in OrionRank, EchoRank, and ΩRank by averaging them together. When the need to hurriedly use a novel ranking methodology came in late 2022 with the dissolution of the PGR and formation of UltRank, RankRank was a natural candidate to use; however, combined with the quirks of the relatively brief ranking period we elected to use, the implementation of RankRank used in UltRank 2022 probably did more to accentuate the weaknesses of the respective algorithms than their strengths. As such, while in the aftermath of the 2022 release and the start of the 2023 season the LumiRank team was undecided on what algorithm to use going forward, we were in agreement that RankRank wouldn’t be coming back.

Early in 2023 separate experiments were made in creating refined versions of the ΩRank and OrionRank algorithms, but these experiments failed to satisfactorily deal with the respective problems of each algorithm. Amidst my own frustration with ΩRank’s deficiencies I observed that EchoRank’s strengths largely overlapped them and wondered if a satisfactory way to create a blending of the two methodologies could be found. The result, codenamed ΔRank, was promising in initial tests and was gradually refined to become the LumiRank algorithm used in the mid-year ranking. The algorithm underwent further refinements for the full year 2023 season.

Overview

For each tournament a qualified player attends, a tournament score is generated based on a weighted average of their win score, loss score, and positive and negative outplacement scores. A weighted average of all tournaments is then taken to generate an overall score for the player, which is then scaled such that the #1 player has a score of 100 and the #50 player has a score of 50. This score is then iterated and the calculations re-run until a finalized score for each player is obtained.
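At a very high level, that loop looks something like the sketch below. This is my own schematic reconstruction of the description above, not the production sheet; the per-event scoring and weighting callbacks, and the assumption that events are dicts with an “entrants” collection, are stand-ins for the components detailed in the following sections.

```python
def rescale(scores):
    """Linear rescale so the #1 player sits at 100 and the #50 player at 50."""
    ordered = sorted(scores.values(), reverse=True)
    anchor = ordered[min(49, len(ordered) - 1)]   # 50th-place score (or last, if fewer players)
    span = (ordered[0] - anchor) or 1.0
    return {p: 50 + 50 * (s - anchor) / span for p, s in scores.items()}

def run_lumirank(players, events, event_score, event_weight, iterations=50):
    """Schematic fixed-point loop: score each attended event from the current
    player scores, take a weighted average per player, rescale, and repeat.
    event_score and event_weight are callbacks standing in for the components
    described in the sections below."""
    scores = {p: 50.0 for p in players}           # arbitrary starting guess
    for _ in range(iterations):
        new_scores = {}
        for p in players:
            runs = [(event_score(p, e, scores), event_weight(p, e, scores))
                    for e in events if p in e["entrants"]]
            total = sum(w for _, w in runs)
            new_scores[p] = sum(s * w for s, w in runs) / total if total else scores[p]
        scores = rescale(new_scores)
    return scores
```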

Wins

Wins are the foundational part of the algorithm; in addition to being used directly to determine the win score for each tournament, they’re also used as a component in determining the other scores. How much a win on someone is worth is based on their score, with higher scores being worth increasingly more than lower scores; for example, a win on a player with a score of 50 is worth significantly more than twice that of a win on a player with a score of 25, and a win on a player with a score of 100 (ie the #1 player) is worth far more than a win on a player with a score of 50. How much a win is worth is further adjusted for volatility (described below) and repeat matchups, with an individual win on someone you’ve played a lot worth less than a win on someone you’ve only played once. New to the full year 2023 version of the algorithm, wins are now adjusted based on your overall win-rate against a player, meaning a win on someone you’re, for example, 1–3 against adds less to your win score than a win on someone you’re 3–1 against. Also new in the full year version of the algorithm, players with consistent losses to players well below them now have reduced gains from wins. Lastly, wins now receive additional weight at a player’s best events and reduced weight at their worst events.
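To illustrate the shape being described, a convex function of the opponent’s score with a repeat-matchup discount behaves the right way. The exponent and the discount curve here are placeholder choices of mine, not the actual curve the sheet uses.

```python
def win_value(opponent_score: float, times_played: int = 1, exponent: float = 2.0) -> float:
    """Convex in the opponent's score: a win on a 100-score player is worth far
    more than twice a win on a 50-score player. Repeat matchups contribute with
    diminishing returns rather than stacking linearly."""
    base = (opponent_score / 100) ** exponent * 100
    repeat_discount = 1 / times_played ** 0.5     # placeholder decay for rematches
    return base * repeat_discount

# With exponent 2: win_value(100) == 100, win_value(50) == 25, win_value(25) == 6.25,
# so the 50-score win is worth four times the 25-score win, matching the convex
# relationship described above.
```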

Losses

Losses function fairly similarly to wins but are somewhat more complicated. For starters, loss scores are capped based on the loser’s score; it’s not any more informative for a sub top 100 player to lose to a low top 20 player than it is for them to lose to a top 5 player, for example. Losing to a player one has many losses to lowers the overall weight of loss scores at that event. Loss scores are tilted towards the worst loss a player has at an event, meaning that if you lose to a top 20 player and a player ranked around 80th, the loss to the second player will form a larger component of the score. If a player loses to someone who has not qualified for the ranking, a score will be approximated based on the winning player’s overall set win rate (for the curious, the screenshot at the top of this article is of the part of the algorithm sheet used to determine this). If a player wins a tournament, a set value is added to their loss score based on how stacked the event was, with a larger value added if they won without dropping a set. The loss score will be lowered somewhat for players with dramatically better losses than they have wins. Losses receive a lower weight for players with a lower score, and a higher weight for players with a higher score. Compared to the mid-year version of the algorithm, losses have lower direct weight in the full year version of the algorithm, but now have a more direct role in determining the overall weight an event gets for a player, detailed in the “Weighting” section.
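To illustrate just two of those mechanics, the cap relative to the loser’s own score and the tilt toward the worst loss, a minimal sketch might look like this. The cap multiplier and the blend weight are placeholders of mine, not the sheet’s values.

```python
def loss_score(own_score: float, loss_values: list[float], worst_loss_weight: float = 0.6) -> float:
    """Cap each loss a bit above the player's own score (losing to a top-5 player
    is no more informative than losing to a low top-20 player for someone well
    below both), then blend the average toward the single worst loss.
    loss_values: scores of the opponents the player lost to at the event
    (assumes at least one loss; lower opponent score = worse loss)."""
    cap = own_score * 1.2                         # placeholder cap
    capped = [min(v, cap) for v in loss_values]
    worst = min(capped)
    average = sum(capped) / len(capped)
    return worst_loss_weight * worst + (1 - worst_loss_weight) * average
```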

Volatility

Competitive Ultimate players exist on a continuum between two extremes: players who can beat anyone and lose to anyone, and gatekeepers who get gatekept themselves, rarely losing to players ranked well below them but rarely upsetting players above them. We consider upsetting someone who never gets upset to be more impressive than upsetting someone who gets upset frequently, and simultaneously an upset to be less surprising if it’s by a player who has a history of them; as such, given a similar score between two players, a win on a less volatile player is slightly more valuable, while it’s slightly better to lose to a highly volatile player than to a less volatile one. Volatility is calculated based on the difference between the average score of a player’s 90th percentile and above qualified wins, and the average score of their 10th percentile and below losses.
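A direct reading of that definition in code looks like the following; the percentile handling is simplified and the actual sheet may bucket these differently.

```python
def volatility(win_scores: list[float], loss_scores: list[float]) -> float:
    """Gap between the average of a player's top ~10% of qualified wins and the
    average of their bottom ~10% of losses: a big gap means they can beat anyone
    and lose to anyone, a small gap means gatekeeper-style consistency."""
    def best_slice(values, frac=0.1):
        n = max(1, round(len(values) * frac))
        return sorted(values, reverse=True)[:n]
    def worst_slice(values, frac=0.1):
        n = max(1, round(len(values) * frac))
        return sorted(values)[:n]
    best_wins = best_slice(win_scores)
    worst_losses = worst_slice(loss_scores)
    return sum(best_wins) / len(best_wins) - sum(worst_losses) / len(worst_losses)
```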

Outplacements

Outplacements come in two flavors: positive outplacements (who you outplace at an event) and negative outplacements (who you’re outplaced by). Positive outplacements receive a substantially greater weight. Both types are based on the number of losers bracket rounds at an event, meaning that if you placed 9th and outplaced someone, you’ll get more for outplacing them if they placed 25th than if they placed 13th. Positive outplacements are scaled based on the best wins a player receives at an event, such that a high placement with few good wins will generate a lower outplacement score than a lower placement with very strong wins. Placement scores may be lowered further if a player’s wins at an event are dramatically worse than their score would indicate. Placement scores are weighted higher for players lower on the ranking and lower for players higher on it.
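The “losers bracket rounds” framing can be made concrete: each standard double-elimination placement (1st, 2nd, 3rd, 4th, 5th, 7th, 9th, 13th, 17th, 25th, …) corresponds to one additional round, so the gap between two players is measured in rounds rather than raw placement numbers. A minimal sketch, with any weighting on top of the round gap left out:

```python
# Standard double-elimination placement ladder; each step is one bracket round.
PLACEMENT_LADDER = [1, 2, 3, 4, 5, 7, 9, 13, 17, 25, 33, 49, 65, 97, 129, 193, 257]

def rounds_between(better_placement: int, worse_placement: int) -> int:
    """Number of bracket rounds separating two standard placements: 9th vs 25th
    is three rounds, 9th vs 13th only one, so the former outplacement counts for
    more. Both arguments must be values on the standard ladder."""
    return PLACEMENT_LADDER.index(worse_placement) - PLACEMENT_LADDER.index(better_placement)
```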

Weighting

After the score of a particular bracket run has been determined based on the above components, the next step is determining the weight it receives in the overall score for the player. This is the most complex part of the algorithm, with many different factors in play (a rough sketch follows the list):

  • Base weight. This is determined by a simple average of the event’s value on the tournament tier system and a live-calculated value that accounts for the algorithm’s calculated scores for all qualified players in attendance (meaning that players like Eik, who were not worth any points on the TTS for 2023 despite the algorithm giving them a relatively high score, still added to the weight for any events they attended).
  • Whether or not a player had anything to gain from the event. If a player wins an event with no sets dropped and yet it still falls below their average score, then its weight is set to zero, removing it from their average entirely. If they did not do this, but the event still would have fallen below their average score if they had done so, then the weight of the event is significantly reduced. This is to avoid punishing top players for attending small regionals.
  • A combination of a player’s overall weighted attendance and where the event ranks among the player’s best and worst bracket runs of the season. Higher weighted attendance places higher weight on their peaks, while lower weighted attendance places higher weight on their weaker performances.
  • If the event is an outlier event for the player in either direction, being either several times better for them than their next best event or several times worse than their next worst event. These events have their weight reduced, unless the event is an outlier peak and the player attended no other events of a similar size (to avoid punishing players whose ability to travel was limited).
  • New in the full year 2023 version of the algorithm, events now have their weight heavily reduced if a player’s losses were dramatically better than their wins. This has the effect of deweighting both events where a player coasted to a high placement due to DQs and upsets and promptly lost to highly ranked players, as well as events where a bad bracket meant a player went out very early in losers to highly ranked players with little opportunity to beat anyone.
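Pulling the first two of those factors together, the per-event weight ends up being a base value with adjustments applied on top. The sketch below reflects only the structure just described; every constant in it is a placeholder of mine rather than the sheet’s actual numbers, and the remaining factors (attendance-based peak weighting, outlier handling, and the losses-vs-wins reduction) are omitted.

```python
def event_weight(tts_value: float, live_value: float, run_score: float,
                 best_possible_score: float, won_without_set_drop: bool,
                 player_average: float) -> float:
    """Base weight is a plain average of the tier-system value and a live value
    derived from attendees' algorithm scores; the weight is then zeroed or cut
    when the player had nothing to gain from attending."""
    base = (tts_value + live_value) / 2
    if won_without_set_drop and run_score < player_average:
        return 0.0                                # perfect run, still below average: event dropped
    if best_possible_score < player_average:
        return base * 0.25                        # even a perfect run couldn't have helped: heavy reduction (placeholder factor)
    return base
```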

Low attendance penalties

In order to avoid over-rewarding players with very low attendance (particularly at majors), low attendance players’ ranking scores are lowered, with the required attendance to avoid penalties increasing with score. This penalty only affects the score used for ranking players, and does not affect the value of head to heads against them. Honorable mentions are players who, at the end of a season, fall below the required threshold of attendance; they may or may not also have low attendance penalties. Whether they appear on the honorable mention list is based on whether the difference between their ranking score and algorithm score falls above a certain threshold, but if they do qualify for the honorable mention list, their algorithm score rather than their ranking score is used to determine their value. End-of-year regional rankings have a substantially lower attendance requirement, and players who failed to meet attendance for the main ranking have an average of their ranking score and algorithm score used to determine their placement on the regional ranking. Players who only attended one tournament during a season are ineligible for honorable mention status or inclusion on regional rankings.
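A schematic of the penalty itself, assuming a simple score-dependent requirement and a linear penalty; both the requirement curve and the penalty shape are invented for illustration, only the structure follows the description above.

```python
def ranking_score(algorithm_score: float, weighted_attendance: float) -> float:
    """Score used to order the ranking; head-to-head value against the player
    still uses the unpenalized algorithm score. The attendance needed to avoid
    any penalty grows with the score itself."""
    required = 3 + algorithm_score / 25           # hypothetical requirement curve
    if weighted_attendance >= required:
        return algorithm_score
    return algorithm_score * (weighted_attendance / required)   # hypothetical penalty shape
```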

Part 4: Comparative outputs in previous seasons

In the run-up to the creation of this article, I ran an unmodified version of the LumiRank Full Year 2023 algorithm on full year seasons for 2019 and 2022. A few general notes:

  • These are not formal “LumiRank Retroranks” and are not the same as if the entire LumiRank methodology had been applied to these years, something we may do in the future. Notably, though each ranking uses several hundred tournaments that should encompass all those that would significantly impact the ranking on their own, for the 2019 ranking these tournaments were chosen arbitrarily rather than using a TTS, and the 2022 ranking uses ΩRank’s 2022 TTS (more restrictive than LumiRank’s). Some events may have been missed, and others that were excluded due to having little individual impact on the ranking would have been included under the full LumiRank methodology; while each of those has little impact on its own, the collective impact of their exclusion may be significant.
  • Several players who were actively competing during these timeframes have since been permanently banned, particularly in the 2019 ranking. As the primary purpose of these rankings is as a tool of comparison with contemporaneous rankings, I elected to remove a player from the list only if they were banned during the ranking season and thus also excluded from most contemporary rankings.
  • In accordance with tradition, the last two weeks of December are included in the following year. This means that the 2019 ranking counts multiple significant 2018 events, including Let’s Make Moves, NYXL Pop-Up, and Super Splat Bros. Other rankings may use ranking periods which exclude these events and/or include end of 2019 ones, which may muddy the waters for comparison somewhat.

Without further ado, the rankings:

2019

For comparison, OrionRank 2019, EchoRank 2019, and the final version of ΩRank’s algorithm on the same dataset.

Notes:

  • Japanese representation is greatly improved, with 37 in the top 100 compared to 24 on OrionRank, 29 on ΩRank, and a mere 17 on EchoRank.
  • The algorithm is fairly generous to Europe, putting 4 Europeans in top 50 and 6 in top 100, though not quite as generous as OrionRank was with its 5 Europeans in top 50 and 15 in top 100.
  • Compared to other algorithms, the LumiRank algorithm is significantly more generous to inconsistent players with strong peaks, such as ZD and Ri-ma.
  • High attendance players without strong peaks, such as moxi, Suarez, and Mr. E, tend to be significantly lower.

2022

For comparison, OrionRank 2022, EchoRank 2022, and ΩRank 2022.

Notes:

  • Sparg0’s significantly higher consistency wins out against MkLeo’s higher peaks to land him the #1 spot.
  • Once again the LumiRank algorithm is substantially more generous towards Japan than prior algorithms, putting 44 of their players in top 100 (assuming I didn’t miscount), compared to 36 for ΩRank, 34 for OrionRank, and a mere 25 for EchoRank. Interestingly, however, two JP players that at least one of the other algorithms had in top 100, Yamanaction and Toura, miss on the LumiRank algorithm this time. Zaki ends up oddly high.
  • The algorithm is roughly as generous to the European top level as OrionRank but is a bit less generous to the next two tiers down, with one fewer European in top 100 overall.
  • Tarik being the 11th best European seems suspect to me; I know why the algorithm’s putting him there (he didn’t do anything at majors) but it indicates there’s probably still some work to be done in handling strong regional results. A few of the North Americans like Scend seem unusually low for similar reasons.
  • Once again, the LumiRank algorithm is substantially more generous to inconsistent players than prior algorithms, ranking Toast, Takera, and Futari no Kiwami Ah~! in the top 100 when all three missed top 150 on OrionRank, all but Toast missed top 100 on both the other two 2022 rankings (Toast was 81st on EchoRank), and FutaKiwa missed top 150 on all three algorithms.
  • Kaninabe and Lv.1 both had very similar 2022s to Jahzz0’s 2023; I think the algorithm probably likes that kind of season a bit more than it should.
  • Especially compared to ΩRank, the LumiRank algorithm is much less favorable to inconsistent players with one or two funny wins, with Larry Lurr and Mezcaul both missing top 100.

Closing

Uhh, I’m not really sure how to close this. I hope this gave you all a better understanding of how the algorithm works and why it works that way. Feel free to reach out to me if you have any questions. Enjoy the ranking’s release!
