2017 NFL Season Backtest: GridironAI Rankings vs. ESPN Expert Rankings

Andrew Troiano
Sep 4, 2018 · 8 min read

It’s been a while, everyone! Our goal this summer was to frequently post articles as we continued developing our platform, but time got in the way. We’re hoping to get back on track and post regularly!


Before I dive in, there are a few bits of housekeeping I want to attend to.

  1. We’ve launched an updated version of our site! Registration is live and we’d love to hear your thoughts! 👏 👏 👏
  2. While the promotional website is finished, we’re still working through a few kinks on the web app that will display our rankings each week. The full site will launch 9/24/2018, just after week 2.
  3. We’ve decided to post our weeks 1 & 2 fantasy rankings on our blog, for free. Check back at the start of each week to see what our models are saying. If you register on gridironai.com, we’ll send you an email each week when we post our predictions.
  4. Everyone who registers right now will become Early Adopters, which gets you 60% off subscriptions for the 2018 NFL season. We also mark Early Adopters in our database and will continue to give them special offers and treatment in the future.

Phew, back to the post!

We’ve spent countless hours over the last year with our heads down tuning our models, and in the last few months we’ve started getting good results. That brought up the question: how good are we compared to the experts at ESPN?

In this article, I’ll share the results of my 2017 NFL simulation and how our rankings compared to ESPN’s experts. The goal is to give you a glimpse of how our models would have performed if they had been used last year.

Setting up the simulation

This simulation looks at weekly rankings for the top 30 fantasy players at each position. I compared ESPN expert rankings and Gridiron AI rankings to the players’ actual rankings for each week. Importantly, no data from the 2017 season was used to train our models or make predictions about the 2017 season.

For each week of the 2017 NFL season, I compared two metrics. The first is the sum of absolute errors and the second is the count of exactly correct rankings.

Let’s say on a given week ESPN experts predicted Kareem Hunt to rank 20th, Gridiron AI predicted him to rank 3rd, and his actual fantasy rank was 6th. In that case, Gridiron AI would get 3 error points (|6 - 3|) and ESPN would get 14 (|20 - 6|). A lower error means more accurate predictions.

Sum of absolute errors is a good metric to compare by because it tells you how good the rankings are overall; it still rewards a ranking that is close but not exact. The drawback of this metric is that when you’re deciding which player to bench or whom to pick up, a ranking off by 3 or 4 spots can make a big difference. A ranking system that is off by only a few spots per player on average will score well on this metric, but those few spots can be the difference between starting the right player and dropping the wrong one. That’s where my second metric comes in.
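For anyone who wants to see the math in code, here’s a minimal Python sketch of the sum-of-absolute-errors calculation, using the Kareem Hunt numbers above. The dictionaries and player names are purely illustrative; this isn’t our actual data or pipeline.

```python
# Minimal sketch: sum of absolute ranking errors for one week's rankings.
# The ranks below come from the Kareem Hunt example above and are illustrative.

def sum_absolute_error(predicted: dict, actual: dict) -> int:
    """Sum of |predicted rank - actual rank| over players ranked by both."""
    return sum(abs(predicted[p] - actual[p]) for p in predicted if p in actual)

actual_ranks = {"Kareem Hunt": 6}
espn_ranks = {"Kareem Hunt": 20}
gridiron_ranks = {"Kareem Hunt": 3}

print(sum_absolute_error(espn_ranks, actual_ranks))      # 14
print(sum_absolute_error(gridiron_ranks, actual_ranks))  # 3
```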

I also compared the number of players whose ranks were projected exactly right (if the projected rank equals the actual rank, the system gets a point). If for a given week ESPN experts predicted Todd Gurley to rank 1st, Gridiron AI predicted he would rank 3rd, and his actual fantasy ranking was 3rd, ESPN would get zero points and Gridiron AI would get 1 point. A higher point total means more accurate predictions.

This is an important metric to look at because most fantasy decisions you make during the season — who you’re playing, benching, picking up, trading, etc. — are sensitive to being off by even just a few spots.
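Here’s the same idea as a quick Python sketch, using the Todd Gurley example above. Again, the dictionaries are just for illustration, not our actual data.

```python
# Minimal sketch: count of exactly correct rankings for one week.
# Ranks come from the Todd Gurley example above and are illustrative.

def count_exact(predicted: dict, actual: dict) -> int:
    """Number of players whose predicted rank equals their actual rank."""
    return sum(1 for p in predicted if p in actual and predicted[p] == actual[p])

actual_ranks = {"Todd Gurley": 3}
espn_ranks = {"Todd Gurley": 1}      # off by two -> 0 points
gridiron_ranks = {"Todd Gurley": 3}  # exactly right -> 1 point

print(count_exact(espn_ranks, actual_ranks))      # 0
print(count_exact(gridiron_ranks, actual_ranks))  # 1
```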

*A future comparison metric we want to look at is a combination of the two metrics in this article. We want to set a bound, say within one ranking spot, and award points based on a ranking system being either exactly right for a player (1 point) or within that bound (maybe 0.5 points). This way exactly correct rankings are rewarded, nearly correct rankings still get some credit, and rankings off by 3 or 4 spots are not falsely rewarded.
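As a rough Python sketch of what that combined metric could look like (the bound of 1 spot and the 0.5 partial credit are just the placeholder values mentioned above, not settled choices):

```python
# Rough sketch of the proposed combined metric: 1 point for an exact rank,
# partial credit within a tolerance, nothing otherwise. The bound and
# partial-credit values are placeholders, not final choices.

def bounded_score(predicted: dict, actual: dict,
                  bound: int = 1, partial: float = 0.5) -> float:
    score = 0.0
    for player, pred_rank in predicted.items():
        if player not in actual:
            continue
        diff = abs(pred_rank - actual[player])
        if diff == 0:
            score += 1.0       # exactly correct ranking
        elif diff <= bound:
            score += partial   # nearly correct, still useful
    return score
```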

My two metrics were calculated for each of the top 30 players at each position and for each week of the 2017 NFL season. Here’s what the results showed:

Gridiron AI is ~1% better based on the absolute error metric.

Gridiron AI is ~27% better based on the count of exactly correct rankings metric.
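For clarity, comparisons like these can be read as relative differences between the two systems’ season totals. The sketch below shows that arithmetic with placeholder totals; it is an illustration of how such a percentage can be computed, not our exact methodology.

```python
# Relative improvement of Gridiron AI over ESPN, with placeholder totals.
def pct_better(gridiron_total: float, espn_total: float, lower_is_better: bool) -> float:
    if lower_is_better:  # e.g. sum of absolute errors
        return (espn_total - gridiron_total) / espn_total * 100
    return (gridiron_total - espn_total) / espn_total * 100  # e.g. exact-rank counts

print(pct_better(99.0, 100.0, lower_is_better=True))    # ~1% better on error
print(pct_better(127.0, 100.0, lower_is_better=False))  # ~27% better on exact ranks
```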

*Gridiron AI relies 100% on data to make its rankings. There are no weekly reviews or modifications based on what we or experts think. In data we trust!

Visualizing how ESPN and Gridiron AI compare

In the above graph, the differences between ESPN and Gridiron AI rankings are plotted for each major offensive position. A positive score means ESPN ranked a player higher than Gridiron AI, a negative score means ESPN ranked a player lower than Gridiron AI, and a score of zero means they ranked the player the same. As you can see, each position centers around zero, which is what you would expect since both ESPN and Gridiron AI are making informed predictions about the same rankings. Importantly, Gridiron AI is not making the same predictions as ESPN; it is in these differences that Gridiron AI can give you an edge!

When we visualize how the predicted ranks from ESPN and Gridiron AI compare to how the players actually ranked in the graph above, we see that the distributions for each position are strikingly similar, with one major exception: Gridiron AI predicts a player’s rank exactly right more often than ESPN. This shows up as the tall bar at 0 for each position.
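If you want to build this kind of plot yourself, here’s a minimal sketch using pandas and matplotlib. The file name and column layout are assumptions about how backtest output might be organized, not our actual schema.

```python
# Minimal sketch: distribution of (predicted - actual) rank differences per
# position, overlaying the two ranking systems. Column names are assumed.
import pandas as pd
import matplotlib.pyplot as plt

# One row per player-week: position, source ("ESPN" / "GridironAI"),
# predicted_rank, actual_rank. "backtest_2017.csv" is a hypothetical file.
df = pd.read_csv("backtest_2017.csv")
df["rank_diff"] = df["predicted_rank"] - df["actual_rank"]

positions = sorted(df["position"].unique())
fig, axes = plt.subplots(1, len(positions), figsize=(12, 3), sharey=True)
for ax, pos in zip(axes, positions):
    for source, sub in df[df["position"] == pos].groupby("source"):
        ax.hist(sub["rank_diff"], bins=range(-30, 31), alpha=0.5, label=source)
    ax.set_title(pos)
    ax.set_xlabel("predicted - actual rank")
axes[0].set_ylabel("count")
axes[0].legend()
plt.tight_layout()
plt.show()
```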

I looked into some of the week-to-week variances and came away with a few learnings.

Key Learnings about Gridiron AI Model:

  1. The model does a good job of identifying players whose stats for a given week are not sustainable. (Case study 1 below highlights this.)
  2. Players who perform well most weeks get the benefit of the doubt when they have multiple down weeks. (Case study 2 below highlights this.)
  3. Mid-season performance gains that are not consistent take a few weeks for the AI to fully buy into (this is something we will improve upon this season).

Case study 1: Chris Thompson

Analysis: Chris Thompson had a few weeks where he over-performed his actual skill or role in the offense by scoring a ton of points on very few touches. We (ESPN and Gridiron AI) both got it wrong. To be fair, it’s hard to identify these types of performances.

Overall, ESPN kept increasing Chris Thompson’s rank throughout the season, all the way to a top-20 player, while GridironAI wasn’t buying the performance and decreased his overall rank.

Below is Chris Thompson’s 2017 box score.

He scored a lot of points with very few touches, with the exception of week 6.

Case Study 2: Kareem Hunt

Analysis: Kareem Hunt had some crazy weeks in 2017 but also some terrible ones. ESPN did not have Kareem Hunt ranked in week 1 or week 15 (GridironAI’s week 1 rank was 18 and week 15 rank was 3).

Generally, GridironAI loved Hunt all season while ESPN was skeptical more often. In the weeks where Hunt underperformed, both ESPN and GridironAI had him as a top RB, with the exception of weeks 13 and 14, where ESPN’s lower rank was a big advantage for their rankings.

Not having Hunt ranked week 1 or week 15 is a big problem for ESPN.

Week 1: Hunt scored 39 FPS.

Week 15: Hunt scored 32 FPS.

Boxscore for Hunt is below:

Case Study 3: Kenyan Drake

Analysis: Kenyan Drake’s fantasy value took off when Jay Ajayi got traded to the Eagles. He had a few big fantasy weeks, followed by a few small ones.

ESPN was quicker to rank him higher to reflect his new number-one status in Miami and, overall, did a better job of ranking Drake for 2017. The only knock is that they ended up over-ranking him at the end of the season. Gridiron AI adjusted Drake down more slowly but tended to be more conservative at the end of the season.

A big takeaway for GridironAI is to help our AI better adjust to week-to-week shifts in playing time, which is something we will work on throughout the season.

Boxscore for Drake is below:

Conclusion

Over the next few weeks we will continue to tune our models and add important data that will enhance their performance even more. We are excited about how our models are performing and would love any feedback you have on Gridiron AI. We just got a snazzy new email address with our custom domain: info@gridironai.com.

If you’re ready to give Gridiron AI a try, registration is open on our website. And as a reminder, in case you skipped the start of this article, we are launching our full site on 9/24/2018, just after week 2. Our rankings for the first two weeks will be posted on our blog, for free. Week one can be found here!

👏 * 👏 * 👏

Don’t forget to give us a few claps and maybe a follow if you enjoyed this article and want to read more like them!

