Can machine learning help improve your fantasy football draft?

Comparison of Fantasy Outliers’ yearly models’ 2017 fantasy football draft performance versus ESPN and Expert Consensus Rankings

Chris Seal
Fantasy Outliers
Aug 7, 2018 · 9 min read


Our 2018 Fantasy Football Season Projections are now available for FREE in this Glorious Google Sheet and this Interactive chart.

Last week, we released three articles describing how our weekly projections beat ESPN's last year in many ways. (If you're interested, a good starting point is the summary of results.) Today, we will outline how our yearly models compared to ESPN's yearly point projections and pre-season Expert Consensus Rankings.

If you’re not familiar with Fantasy Outliers, we’re a small group of fantasy football aficionados and a data scientist (why hello there) who do this work because…

We believe that human expertise combined with machine learning is better than either by itself.

Now, let me first say that we did not document our hand adjustments to these yearly projections last year, so the following analysis compares our unadjusted models' outputs to ESPN and expert consensus. Obviously, a lot can happen in an NFL offseason, and it's hard for any one model to account for all of the changes. So in practice, one could reasonably expect the hand-adjusted results to be better.

Also, because we are limited by a small sample size, it’s hard to say things like: “Yes, with 99% confidence our Top 10 projections for running back were better than X.” One’s success or failure in a draft largely depends on a few individual choices, and let’s be honest, a fair amount of luck. So unfortunately, the results of this comparison are less clear-cut than the results of our weekly model comparison.

With those disclaimers in mind, below, we’ll take a look at:

  • how our models' Total Points projections and Overall Ranks compared to ESPN's, and then,
  • how our Within Position Rankings compared to ESPN's and Expert Consensus Rankings (ECR).

Summary of Results — Fantasy Outliers’ 2017 pre-season draft projections versus ESPN

The basic results are (without any hand adjustments, mind you):

  1. Our overall rankings for QB and RB were more accurate than ESPN's and were good directional indicators for all positions.
  2. Our within position rankings were better for quarterback; for the other positions it was a statistical toss-up, though we had a few really good value finds (Mark Ingram, Russell Wilson, Zach Ertz, etc.).

Litmus test: Comparing our fantasy football models to actual values

We have yearly models for: Opportunities per Game, Points per Opportunity, Number of Games Played, and Week-to-Week Variance.

We then calculated Points per Game like so:

Points per Game = Opportunities per Game * Points per Opportunity

And finally, we used this equation to calculate Total Points in a year:

Total Points = Points per Game * Number of Games Played

The reason we split our models into these components is that if you disagree with a prediction, you can adjust one component and keep the rest intact.
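As a quick illustration, here's how the composition, and an adjustment to a single component, might look in code (the numbers are hypothetical, not actual model outputs):

```python
# Sketch of how the component projections compose into Total Points.
# All numbers below are hypothetical, not actual model outputs.

def project_total_points(opportunities_per_game: float,
                         points_per_opportunity: float,
                         games_played: float) -> float:
    """Combine the three component projections into a season total."""
    points_per_game = opportunities_per_game * points_per_opportunity
    return points_per_game * games_played

# A hypothetical running back:
baseline = project_total_points(18.0, 0.85, 14.5)  # ~221.9 points

# Disagree with one component? Override just that piece (here, a full
# 16 games instead of 14.5) and keep the rest intact.
adjusted = project_total_points(18.0, 0.85, 16.0)  # ~244.8 points
```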

The following charts compare our Points per Game (PPR) and Games Played predictions to the actual values, respectively. You can see that both models were better than chance (i.e., low p-values). The math also says that:

  • Our Projected PPR Points per Game accounted for 66% of the variance in Actual PPR Points Per Game (the corresponding number in standard leagues was 75%)
  • Our Projected Number of Games Played models predicted 27% of variance in the Actual Number of Games Played.

In other words, the models were not total crap!

Predicted vs. Actual scatterplots for Points per Game (PPR) (left) and Games Played (right). Data is filtered on players who played at least 3 games in Weeks 1-15 of the 2017 NFL Season
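If you want to run this kind of check yourself, the "variance accounted for" numbers above are R² values. Here's a minimal sketch, with made-up projected and actual values standing in for the real data:

```python
import numpy as np
from scipy import stats

# Hypothetical projected vs. actual PPR points per game; the real
# analysis used every player with 3+ games in Weeks 1-15 of 2017.
projected = np.array([18.2, 15.1, 12.4, 10.8,  9.5, 14.0, 16.7, 8.3])
actual    = np.array([19.0, 13.8, 13.1,  9.9, 10.2, 15.5, 15.9, 7.6])

# Ordinary least-squares fit of actual on projected.
result = stats.linregress(projected, actual)

# r**2 is the share of variance in the actual values explained by the
# projections; pvalue tests whether the relationship beats chance.
print(f"R^2 = {result.rvalue**2:.2f}, p = {result.pvalue:.4f}")
```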

Comparing Fantasy Outliers’ total points projections to ESPN’s

Okay, you might be saying, this is interesting, but how did your projections compare to other people’s projections? In this section, we will attempt to answer the general question — were Fantasy Outliers’ yearly points projections better than ESPN’s?

We limited the data for comparisons to Weeks 1–15 of 2017 (making the corresponding adjustments to the respective datasets). Furthermore, we only looked at players with at least two years of NFL experience. (Our models had a known issue with rookies and second year players last year that we will correct going forward, and assess in the next offseason.)

To have a large enough sample size for statistical analysis, we included the Top 50 rated players at each position. We ignored ties and eliminated players who were injured before the season.

Here we see Fantasy Outliers’ Total Points winning percentage versus ESPN’s Total Points projections.

  • The Better Ranking columns mean that Fantasy Outliers’ resulting rankings were closer than ESPN’s to the actual rankings, calculated at the end of Week 15 last year.
  • The Directionally Correct columns mean that Fantasy Outliers' yearly projections were directionally accurate relative to ESPN's, but not necessarily closer. In other words, if we rated a player higher than ESPN did, and at the end of the season that player did indeed finish higher than ESPN's initial projection implied, then we were 'directionally accurate'. (A sketch of both metrics follows this list.)
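Here's a minimal sketch of how both metrics can be computed under the definitions above, with made-up ranks:

```python
import numpy as np

# Hypothetical preseason ranks from Fantasy Outliers and ESPN, plus the
# actual end-of-Week-15 ranks, for the same five players.
fo_rank     = np.array([3, 7, 12, 20, 25])
espn_rank   = np.array([5, 6, 15, 18, 30])
actual_rank = np.array([2, 9, 10, 22, 24])

# Better Ranking: our rank was strictly closer to the actual rank.
better = np.abs(fo_rank - actual_rank) < np.abs(espn_rank - actual_rank)

# Directionally Correct: where we disagreed with ESPN, the actual result
# landed on the side of ESPN's rank that we predicted.
disagree = fo_rank != espn_rank
directional = np.sign(espn_rank - fo_rank) == np.sign(espn_rank - actual_rank)

print(f"Better Ranking win rate:        {better.mean():.0%}")
print(f"Directionally Correct win rate: {directional[disagree].mean():.0%}")
```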

Winning rates versus ESPN:

Fantasy Outliers’ Total Points projections win rate versus ESPN. Better Ranking: The number of times our yearly point projections were closer to the actual value than ESPN’s. Directionally Correct: Fantasy Outliers’ yearly models winning percentage when used as a directional indicator relative to ESPN’s yearly projections. Data is filtered on Top 50 rated players in their position with at least two years of NFL experience going into the season.

As you can see, most of our win rates versus ESPN are above 50%.

The only exceptions are our absolute rankings for tight ends, where the win rates hover just below 50%.

But this still doesn't answer our question: were our models statistically better than ESPN's? Here we used beta distributions, a probabilistic method commonly used in A/B testing, to calculate the probability that ESPN's models were better (Better Ranking columns) or that Fantasy Outliers' models' directional accuracy relative to ESPN's projections was worse than chance (Directionally Correct columns). As you can see from the following graph, most of the probabilities are pretty low, close to zero.

Odds Fantasy Outliers’ Total Points projections are worse than ESPN’s. For Better Ranking columns, numbers represent the probability that ESPN yearly point projections are better than Fantasy Outliers’ yearly projections. For Directionally Correct columns, the numbers represent the probability that our directional performance is worse than chance. Low numbers (dark blue) are better for us/Fantasy Outliers.
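For the curious, here's a minimal sketch of that beta-distribution calculation, using a made-up win/loss tally rather than our actual counts:

```python
from scipy import stats

# Hypothetical tally: our projection was closer for 32 of 50 players.
wins, losses = 32, 18

# With a uniform Beta(1, 1) prior, the posterior over our true
# head-to-head win rate is Beta(1 + wins, 1 + losses).
posterior = stats.beta(1 + wins, 1 + losses)

# Probability our true win rate is below 50%, i.e., that ESPN's
# projections are actually better despite the observed edge.
print(f"P(worse than ESPN) = {posterior.cdf(0.5):.3f}")
```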

From this graph, math would tell us…

Our yearly total points projections are probably closer to actual points scored than ESPN’s projections for quarterbacks and running backs

Also, math would say:

Our yearly total points projections are probably directionally accurate relative to ESPN’s projections for all positions and scoring formats, except WR in Standard leagues.

Taking this a step further, we compared Fantasy Outliers' and ESPN's average total point projections for Top 30 players to the actual total points scored. As you can see, when taken as a whole, our total season points projections were closer to the actual values. ESPN's seem to over-predict, maybe because they aren't accounting for injury risk?

Average Total Points projections compared to actual values for Top 30 players going into the season at each respective position

After seeing the above graph, we wanted to check whether our projections were still helpful if you remove our Games Played projections. To do this, we looked at our value-based projections, which compare a player's projected Points per Game to that of their position group and thus do not take injury risk into account. We compared the resulting overall rankings to ESPN's overall rankings, as they related to end-of-season total points scored.
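As a sketch of the idea (the median baseline below is a stand-in; the exact reference point our models use isn't specified here):

```python
import numpy as np

# Hypothetical projected points per game for one position group (e.g. RB).
ppg = np.array([17.8, 15.2, 14.9, 13.1, 12.6, 11.0, 10.4, 9.8])

# Value over baseline: compare each player's projected PPG to a reference
# point within the position group -- the median here is just a stand-in.
# Games Played is deliberately left out, so injury risk plays no role.
value = ppg - np.median(ppg)

# Higher value means a bigger edge over the typical player at the
# position; these scores can then be merged into an overall ranking.
print(np.round(value, 2))
```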

As you can see, our winning rates went down a bit, but our value-based projections were still very good directional indicators (right-hand side of the graph):

Fantasy Outliers’ value-based overall rankings win rate versus ESPN.

And in case you're curious, here are the probabilities that our models are worse than ESPN's (Better Ranking columns) and the probabilities that our models are worse than chance (Directionally Correct columns):

Odds Fantasy Outliers’ value-based overall rankings are worse than ESPN’s.

Keep in mind that all of these results are without hand adjustments to the models, which, if done strategically, should improve the results.

Okay, okay, I hear you. You’re saying, “Chris, this is all nice and good, but this is too hypothetical, I want to know how you did with individuals at the top of the draft.”

Fantasy Outliers’ positional draft ranking differences vs. ESPN and Expert Consensus Rankings

Here, we try to determine who was less (or more) wrong going into the season. We looked at the end-of-season Top 18 players at each position, leaving out those with less than two years of NFL experience. We then compared Fantasy Outliers' yearly pre-season rankings by position to ESPN's and ECR, respectively.

In the following graphs:

  • negative numbers (blue) show the extent to which we were less wrong than ESPN (or ECR), whereas
  • positive numbers (red) show the extent to which we were more wrong than ESPN (or ECR).

We limit our discussion here to Standard scoring leagues. Also, keep in mind the small sample sizes.
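Concretely, here is how such "less wrong / more wrong" values can be computed; the ranks below are made up for illustration:

```python
import numpy as np

# Hypothetical preseason position ranks versus actual end-of-season ranks.
fo_rank     = np.array([1, 4,  9, 2])
espn_rank   = np.array([3, 2, 11, 6])
actual_rank = np.array([1, 2,  3, 4])

# Signed difference in absolute rank error: negative means Fantasy
# Outliers was less wrong than ESPN for that player, positive means
# more wrong, zero means a tie.
diff = np.abs(fo_rank - actual_rank) - np.abs(espn_rank - actual_rank)
print(diff)  # [-2  2 -2  0]
```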

Our within position rankings for quarterback were very good, but for other positions, it was a toss up

Quarterbacks:

As was the case with our weekly models, we did best predicting quarterback performance. Our models had wins on Russell Wilson, Alex Smith, and Blake Bortles, and a big whiff on Cam Newton. I included Russell Wilson here because he was by far our highest-rated QB going into the season, and indeed, he ended the season at #1.

Our unadjusted preseason model projections were tied with or more accurate than ESPN's for 11 of 14, and ECR's for 11 of 15, of last season's top QBs with 2+ years of experience.

QB: Relative performance of Fantasy Outliers’ yearly rankings by position errors versus that of ESPN’s (left) and ECR (right). Negative/blue values mean we were less wrong, while positive/red values mean we were more wrong.

Wide Receiver (Standard):

Our big oversight for wide receivers was Keenan Allen, with A.J. Green and Alshon Jeffery close seconds. That said, our yearly total point projections had relative successes in the bottom half of this graph, especially for Adam Thielen, Jarvis Landry, and Nelson Agholor.

WR Standard: Relative performance of Fantasy Outliers’ yearly models errors versus that of ESPN’s (left) and ECR (right). Negative/blue values mean we were less wrong, while positive/red values mean we were more wrong.

Running backs (Standard):

On to running backs. Our models totally missed Todd Gurley. There was so much change in the Rams' offseason, especially new members of the receiving corps and offensive line, that a lot of things opened up for Gurley, and our models sadly couldn't keep up. That said, our big hit on Mark Ingram helped a lot of people win their leagues last year.

RB Standard: Relative performance of Fantasy Outliers’ yearly models errors versus that of ESPN’s (left) and ECR (right). Negative/blue values mean we were less wrong, while positive/red values mean we were more wrong.

Tight Ends (Standard):

We missed on Rob Gronkowski and Vernon Davis, but hit big on Zach Ertz — a great later-round find!

TE Standard: Relative performance of Fantasy Outliers’ yearly models errors versus that of ESPN’s (left) and ECR (right). Negative/blue values mean we were less wrong, while positive/red values mean we were more wrong.

Overall, we found that our yearly models showed promising results in the 2017 NFL season:

  • Our total points projections and overall rankings were better than ESPN's for QB and RB, and were good directional indicators for all positions
  • Our within position rankings for quarterback appear to be better than ESPN's and ECR's, but the within position rankings for the other positions (RB, WR, TE) were a toss-up

Although we totally whiffed on some top performers, the players we did hit on helped people win their leagues (Mark Ingram, Russell Wilson, Zach Ertz, etc.).

Also, I should reiterate that this analysis did not incorporate any human adjustment to our models, which if done strategically, would likely improve results to one degree or another. Going forward, we can only suggest you use the best human expertise available, and let Fantasy Outliers’ projections tip you off to some potential value finds.

To stay in touch, please, join us by following us on Twitter (@fantasyoutliers) or subscribing to our weekly newsletter. We’re a small team, so let’s grow together!
