Titans of the Draft: Assessing NBA Teams’ Drafting Ability

Zawwar Naseer
Dec 4, 2023 · 7 min read


Update: Since publishing this article and receiving feedback, I've realized that some of my initial assumptions skew the model too much. Specifically, grouping players into buckets instead of assigning specific values, and adjusting for historical draft positioning instead of comparing against the players available in the current draft, were significant oversights. The updated model is superior.

For those who just want to see the final results table, here it is. My process and methodology for getting these results are explained below and the raw data can be viewed here.

Abstract

I believe that if you got every president of basketball operations/GM for an NBA franchise together in a room, gave them all truth serum, and asked them what the hardest part of their job is, they would say a lack of time. The average tenure for an NBA general manager is 30 months, just under three seasons. Considering that player development, salary-cap reshuffling, and trades often take multiple seasons to reveal themselves with clarity, this is quite a short timeline, but an understandable one. Unlike other businesses, sports are uniquely a zero-sum game: the amount of "wealth" in the total system cannot increase over time, and there is only one winner every year.

My thesis is that this zero-sum dynamic creates what is known as "asymmetric risk," where the negative consequences of wrong decisions significantly outweigh the positive outcomes of right decisions. In the financial world, this is a well-understood concept. As a result, the world's leading money managers rely on a strategy of creating a well-defined thesis that can be proven right or wrong and using algorithms to determine the outcome based on that logic, commonly known as "quantitative investing."

But unlike making money in the market, sports are an inherently emotional endeavor. The emotional investment in sports arises from various sources: the thrill of victory, the agony of defeat, the shared experiences among fans, the personal connections to teams and athletes, etc. My goal is to, over time, build an operating system that isn’t susceptible to this personal and emotional bias.

It seems the clearest way to pick a winning strategy would be to find which GMs are currently doing the best job, identify the themes common to all of them, and build a system around those. However, evaluating the performance of an NBA GM is not straightforward. Many factors that lie beyond the GM's control, such as player injuries, coaching, financial constraints, and ownership decisions, can drastically influence a team's success or failure. These variables create a landscape where the true impact of a GM's decisions is obscured by circumstance.

To assess a GM’s performance more accurately, I would instead consider the concept of ‘alpha’ — a term again borrowed from finance, representing the value a GM adds above the market average independent of these external factors. The question then becomes: How can we isolate and evaluate the alpha generated by a GM?

One approach is to examine the NBA draft. The draft represents a nearly controlled experiment, offering near-perfect information and a level playing field. It’s here that a GM’s acumen, foresight, and decision-making skills are most transparently tested.

To understand what separates the best GMs from the rest, a two-pronged analysis is most beneficial. First, studying the processes and habits of the most successful GMs can reveal patterns and strategies that lead to success. This involves not just their draft picks, but also their approaches to team building, talent development, and adaptability to changing game dynamics. Second, analyzing the habits and decisions of less successful GMs is equally important. Identifying common pitfalls and mistakes can provide valuable lessons on what to avoid.

Thus I have set out to collect and evaluate data to measure each team’s generated alpha in the draft and evaluate them accordingly. The following is my methodology for going about this.

Methodology

The methodology for measuring the success of a single draft pick is based on the player's performance adjusted for where they were taken in the draft. The model takes into account every draft pick made since 2016. We had to pick a cutoff year for the data, and 2016 seems reasonable because that is approximately when modern basketball began to take shape in terms of three-point shooting, spacing, switching defenses, etc.

Measuring Player Performance

Traditional models often use advanced metrics like Player Efficiency Rating (PER) or LEBRON to gauge a player's value. However, I believe these metrics alone are too limiting: they often miss context, and it is almost impossible to capture a player's entire value with one number. Instead, I opted for a points system that uses end-of-season awards voted on by the media to determine which category bucket each player falls into. This is the multiplying factor (M), categorized as follows:

1 = The player signed a second contract in the NBA with at least 50% in guaranteed money

4 = The player made an All-Rookie team and signed a contract extension with their original team

12 = The player has made an All-Star team

17.15 = The player has made an All-NBA team

These rules create a structure for the model, but I was able to adjust for clear exceptions. For example, Michael Porter Jr. has never made an All-Star team and wasn't selected to an All-Rookie team, so the model originally gave him a score of 1. However, he is clearly a much better player than that and on par with the players who did make the All-Rookie team from his draft class. As such, I have overridden the model to give him a score of 4. There were a handful of these exceptions throughout.

The scores themselves were decided by the percentage of NBA players who generally fit into each of the preceding buckets. For example, of all the players to make an All-Rookie team since 2016, one-third have gone on to become All-Stars. Thus the model weights a player who made an All-Star team (12) as three times as valuable as an All-Rookie player (4). A similar calculation was done for each of the other buckets.

Accounting for Draft Position

We also need to account for where in the draft a player was taken. While higher picks are traditionally viewed as more valuable, our purpose is to assess a team's ability to evaluate talent rather than the players themselves. As such, the formula rewards teams for picking good players later in the draft.

The actual draft function is based on previously published work that looks at the chances of drafting an All-Star at each pick. The graph below depicts this.

Logically, the chances of selecting an All-Star decrease with every pick; however, the exact function by which they decrease is represented as P(x), where x is the draft pick number. Based on the data:
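The fitted curve itself appears in the original as an image and isn't reproduced in this text. Purely to illustrate the shape, here is a hypothetical exponential-decay stand-in; the constants A and B are made up, not the fitted values:

```python
import math

# Hypothetical stand-in for P(x), the probability of drafting an All-Star
# at pick x. The article's fitted function is not reproduced in the text;
# an exponential decay with made-up constants is used purely to show the
# monotonically decreasing shape.
A, B = 0.5, 0.08  # illustrative constants, NOT fitted values

def p_all_star(pick: int) -> float:
    return A * math.exp(-B * pick)

# The probability strictly decreases with each later pick:
assert p_all_star(1) > p_all_star(14) > p_all_star(30) > p_all_star(60)
```

One plausible way to combine this with the multiplier M is something like M / P(x), which rewards hits at later, lower-probability picks; the article does not state the exact combination, so treat that as an assumption as well.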

The Process

The draft score formula is used to compute a score for every pick. Each team's pick scores since 2016 are then averaged to get its final score.

To better simulate the real draft, we take a weighted average, distinguishing between picks 1–35 and picks 36–60. A simple overall average weighs second-round picks too heavily. Through experimentation, we have found that an 85–15% round weighting works best.
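The 85/15 round weighting can be sketched as below. This assumes the split at pick 35 described above; the function name and the handling of teams with picks in only one bucket are my own choices, not the article's:

```python
# Sketch of the team-level score: an 85/15 weighted average of per-pick
# draft scores, split into picks 1-35 and picks 36-60. Assumptions mine.

def team_score(picks: list[tuple[int, float]]) -> float:
    """picks: (pick_number, draft_score) pairs for one team since 2016."""
    early = [s for pick, s in picks if pick <= 35]
    late = [s for pick, s in picks if pick > 35]

    def avg(xs: list[float]) -> float:
        return sum(xs) / len(xs) if xs else 0.0

    # If a team only has picks in one bucket, fall back to a plain average
    # (a design choice for this sketch, not stated in the article).
    if not late:
        return avg(early)
    if not early:
        return avg(late)
    return 0.85 * avg(early) + 0.15 * avg(late)

# Example: one strong early pick, one quiet second-rounder.
assert abs(team_score([(7, 10.0), (45, 2.0)]) - 8.8) < 1e-9
```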

Undrafted players are also accounted for in the model. These players present a unique challenge because players have more control over picks in the later second round, which can obscure a team's drafting ability. A prominent recent example is Austin Reaves. He was slated to be selected 42nd overall by the Pistons in 2021 but informed them he would rather go undrafted so he could choose which team he signed a two-way contract with. While front offices should be rewarded for finding undrafted gems, when assessing talent-evaluation skill, not having the confidence or ability to draft those players should be taken into account as well. Additionally, we can't account for every undrafted player who is given a chance but doesn't hit.

Accordingly, undrafted players with scores above 0 are put into the model as if drafted with the 45th pick. The same has been done with all picks after 35, as those picks are generally viewed as having the same value. The final results are posted in the table at the top.

I plan to use these results going forward as a baseline to learn and test my own drafting abilities.
