The Cost Of Bad Drafting

NBA teams could be, on average, about $90 million “richer” if they were better at picking players.

“With the 1st pick in the 2007 NBA Draft, the Portland Trail Blazers select Michael Conley from Fayetteville, Arkansas, and Ohio State University.”

This is how former hedge fund manager Philip Maymin, Ph.D., thinks the 2007 NBA Draft should’ve gone. Instead, the Trail Blazers selected Conley’s Buckeye teammate Greg Oden. According to a paper Maymin published in May, in which he measures the cost of bad picks against his model, that decision cost Portland around $7.4 million in “lost wins” for the first three years of Oden’s career.

That wasn’t even one of the costliest high-lottery picks of recent vintage. Drafting O.J. Mayo with the No. 3 pick of the 2008 NBA Draft cost the Grizzlies $13.1 million in total wins produced. With the second overall pick in 2009, Hasheem Thabeet, Memphis let another $8.7 million slip away. In total, looking at the 2003-2013 drafts, the Grizzlies are $222.5 million in win value “poorer” than if they’d used a draft algorithm Maymin created.

According to Maymin’s paper, the Grizzlies are among 13 teams that have missed out on more than $100 million in win value, and another 14 that forfeited at least $20 million over the past decade. Only two teams — Chicago and New Orleans — beat Maymin’s model over that time span.

About a decade ago, Maymin was a hedge fund portfolio manager moonlighting as a New Jersey Nets beat writer for Basketball News Services. He earned his Ph.D. and became an assistant professor of finance and risk engineering at NYU Polytechnic School of Engineering before he and a group of friends, including his brother, wrote their first NBA paper. By 2011 they were presenting at the MIT Sloan Sports Analytics Conference and consulting with NBA teams.

While teams are considering analytics on draft night, they are still relying too heavily on executive analysis and opinion, he said during a phone call earlier this month, echoing the words of this year’s Sloan panelists.

“Overall decision making is still a black box — it’s the head of the general manager, it’s the brain of the people that work in the front office,” Maymin explained. “They take all these inputs, scouting reports, what they see with their own eyes … and they throw it into their brain and they crunch it around.”

Maymin’s algorithm uses statistics from a player’s most recent year of NCAA performance — jumping ability, rebounds, and a dozen or so other measures — to project how he’ll perform in the NBA, then recommends whom each team should draft. In his paper, Maymin compared the model’s picks to teams’ actual draft picks for the 2003-2013 drafts, putting a monetary value on the decisions by looking at Wins Produced during the first three years of a player’s NBA career.

The website behind the Wins Produced metric values NBA wins at $1.65 million each.
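Under this framework, the “cost” of a pick is just the gap in Wins Produced between the model’s suggestion and the actual selection, multiplied by the dollar value of a win. A minimal sketch of that arithmetic (the function name and the win totals below are hypothetical, chosen only to illustrate the scale of a figure like Portland’s $7.4 million):

```python
# Sketch of the paper's win-value arithmetic. Illustrative only:
# the Wins Produced inputs here are made-up numbers, not from the paper.
WIN_VALUE = 1_650_000  # dollars per NBA win


def pick_cost(model_wins: float, actual_wins: float) -> float:
    """Dollar value lost by taking the actual pick instead of the
    model's suggestion, over the first three NBA seasons."""
    return (model_wins - actual_wins) * WIN_VALUE


# e.g. if the model's suggested pick produced 6.0 wins over three years
# and the actual pick produced 1.5, the team "lost" roughly $7.4 million:
print(pick_cost(6.0, 1.5))  # → 7425000.0
```

A negative result would mean the front office beat the model at that slot, as Chicago and New Orleans did overall.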

Now, there are some caveats here:

  1. Maymin’s model only accounts for the first three years of a player’s pro career. As FiveThirtyEight’s Nate Silver pointed out in May, the fourth year of a player’s career is often critical, and most players peak around their fifth year. The amounts “lost” might be lower if looking at a timeframe longer than three years, as most teams do on draft night.
  2. Losses in the model can accumulate because more than one team can miss out on the same player who the model projects to do well, like Kawhi Leonard or Kenneth Faried. “I’m not trying to predict what would have happened if every team had followed the model,” Maymin explained. “I’m not interested in making the NBA as a league more efficient. The goal is a championship [for one team].”
  3. The model doesn’t currently account for high school (not eligible anymore) or international players, but Maymin says the projected losses would likely be higher if they were included.
  4. Only looking at Wins Produced leaves out overall team fit, chemistry, and the team’s timeline for the pick to come good. Maymin is no stranger to team chemistry, though. He and a group of colleagues analyzed it in a previous paper that they presented at Sloan in 2012. It’s a factor, he said, but the impact is secondary.
  5. Some analysts also criticize Wins Produced for undervaluing individual defense and overvaluing rebounding. Analyzing NBA performance with a different metric could show that teams’ draft picks were better or more valuable than the model indicates.

That said, the model appears to have significantly outperformed the NBA as a whole over the past 11 drafts. While teams have generally done about as well as the model from picks 1-5 and 46-60 (humans have actually outperformed the computer with their top-five picks), they have forgone tremendous value from picks six through 45.

In 26 of the 60 draft slots, the model has done “significantly better” than the humans picking in those spots, and it has been better than humans in 47 of them. In only one of the 60 slots has the model been “significantly worse,” according to Maymin.
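One way to picture this slot-by-slot comparison is as a running tally of the model’s edge (model wins minus actual wins) at each draft position across the 11 drafts. A hedged sketch, with entirely made-up records rather than the paper’s data:

```python
from collections import Counter

# Hypothetical (slot, model_wins, actual_wins) records, one per draft year.
# These numbers are invented for illustration, not taken from the paper.
records = [
    (1, 5.0, 6.5),   # humans beat the model at slot 1 this year
    (1, 4.0, 3.0),
    (16, 3.2, 0.4),  # model beats humans at slot 16
    (16, 2.8, 1.1),
]


def slot_edges(records):
    """Net wins by which the model out-produced actual picks, per slot."""
    edge = Counter()
    for slot, model_wins, actual_wins in records:
        edge[slot] += model_wins - actual_wins
    return edge


edges = slot_edges(records)
model_better = [slot for slot, e in edges.items() if e > 0]
print(model_better)  # slots where the model came out ahead → [16]
```

Counting how many slots land on each side of zero (and by how much) is, in spirit, how a “model better in 47 of 60 slots” style summary would be produced.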

(Chart courtesy of Philip Maymin)

While in 13 of the 60 slots, the teams’ composite draft choices performed better than the model, only two actual teams have trumped the algorithm.

Chicago beat the model in 2003, 2004, 2006, 2007, 2008, 2009, and 2011 — more than any other team — for a total of $85.5 million over the decade. New Orleans was the only other team to beat it overall. The Hornets (now Pelicans) were helped by making the right gamble in 2005 with Chris Paul. The model had suggested Sean May, who turned into a nondescript NBA player.

Other teams struck gold in spots, as well. The model had Derrick Williams going first in 2011. Instead, Cleveland went with Kyrie Irving, a decision that netted them more than $5 million worth of extra wins each year. In 2010, the model suggested Hassan Whiteside should’ve gone as high as 16th. Whiteside did produce more wins than the actual 16th pick that year (Luke Babbitt, Portland), but more than half of the players taken between Babbitt and Whiteside also outproduced Whiteside. That same year, the model had Jarvis Varnado as high as 22nd. Varnado went 47th and produced -0.07 wins. Perhaps the model’s most egregious mistake was May, whom it wanted as high as second overall in 2005.

This serves as a reminder that NCAA performance isn’t a perfect indicator of subsequent NBA success. Recall the 2007 NBA Draft, when the model wanted Conley to go first. Conley did outperform Oden in college, but that was in large part due to a hand injury the model didn’t adjust for. At No. 2, though, Oklahoma City was smart to choose Kevin Durant over Conley. Picks 9, 22, 24, 31, and 48 also produced more wins than Conley. Even the 56th pick, Ramon Sessions, nearly outperformed Conley — indicating that both the GMs and the model could’ve done better that year.

This is just one model; teams are already considering dozens of others. And as all teams develop better analytical strategies, it will only become that much harder to “beat the market” in the long term. (538’s Neil Paine explained this with regard to the NFL Draft in May.)

Maybe the answer in a perfect world is to combine analytics and opinion, or use multiple algorithms, as Sacramento attempted to do this year. But as their final approach proved, the Kings — like many other teams — haven’t quite figured out which model to trust. At the end of the day, it’s still a GM deciding what is best. Human bias will never be completely eliminated.

Regardless of whether teams use Maymin’s model or another, Maymin’s paper makes one thing clear: most teams are leaving something on the table. And if you believe his work? That forgone value runs well into nine figures.

Poor Memphis. At least they didn’t have their 2003 pick to waste on Darko.