Forecasting Football Fever

Exploring Seasonal Datasets in Deephaven

Deephaven Data Labs
Aug 7, 2020 · 7 min read

By Aditya Pethe

Photo by Dave Adamson on Unsplash

From September to January every year, football takes over America. Games dominate TV Sunday and Monday nights, and my brother tears his hair out each week over his consistently underperforming fantasy teams. The hype seems to reach an unbearable level by the time the playoffs roll around.

But is there a way to measure and forecast that hype? I decided to use one of my favorite NFL players, Peyton Manning, in order to explore seasonality in Deephaven’s Jupyter Notebooks. Using a dataset of Manning’s Wikipedia search frequencies taken over an 8 year period from 2008 to 2016, my goal was to break down how football hype evolved throughout the season.

To do this, I decided to take two approaches to analyzing seasonality. The first was the traditional ARIMA model, and the second was the newer Fbprophet library. I would use both these methods to fit, predict, and validate models to see which was better at understanding NFL hype.

OUR DATA

We can plot our data in Deephaven with the following code:
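
A rough sketch of the idea, shown here with pandas and matplotlib standing in for Deephaven's interactive plotting (the CSV file name and the "ds"/"y" column names are assumptions):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical file: log-transformed daily Wikipedia page views for
# Peyton Manning, with a 'ds' (date) column and a 'y' (log page views) column.
df = pd.read_csv("peyton_manning_wiki_views.csv", parse_dates=["ds"])

plt.plot(df["ds"], df["y"])
plt.xlabel("Date")
plt.ylabel("log(page views)")
plt.title("Peyton Manning Wikipedia page views")
plt.show()
```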

At a top-level glance, our data is log-transformed Wikipedia page views for Peyton Manning taken each day for about 8 years. The data appears to exhibit some strong seasonal trends that we can look into.

[Plot: log-transformed daily Wikipedia page views for Peyton Manning, 2008 to 2016]

Additionally, before we begin breaking down our data, we want a consistent way to visualize our forecasts. We can produce a function that takes our training, testing, and any forecast data and plots it with Deephaven. This allows us to combine analysis from multiple libraries and methods with Deephaven’s powerful and interactive plotting.
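
A sketch of such a helper, again with matplotlib standing in for Deephaven's plotting API:

```python
import matplotlib.pyplot as plt

def plot_forecast(train, test, forecast, title="Forecast vs. actuals"):
    """Plot training data, held-out test data, and a model forecast together.

    Each argument is assumed to be a pandas Series indexed consistently
    (e.g., by date), so the three pieces line up on the x-axis.
    """
    plt.figure(figsize=(12, 5))
    plt.plot(train.index, train.values, label="train")
    plt.plot(test.index, test.values, label="test")
    plt.plot(forecast.index, forecast.values, label="forecast", linestyle="--")
    plt.legend()
    plt.title(title)
    plt.show()
```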

ARIMA

ARIMA stands for autoregressive integrated moving average.

The autoregressive, or AR, component of the model is a linear combination of the previous N lags. For our Peyton Manning model, this means some linear combination of observations from the previous N days, weeks, or months.
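
In standard notation, the AR(p) term is a weighted sum of the previous p observations plus a noise term:

$$y_t = c + \phi_1 y_{t-1} + \phi_2 y_{t-2} + \cdots + \phi_p y_{t-p} + \varepsilon_t$$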

The moving average, or MA, component of the model is a linear combination of the error terms from the previous N lags, like so:
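
$$y_t = \mu + \varepsilon_t + \theta_1 \varepsilon_{t-1} + \theta_2 \varepsilon_{t-2} + \cdots + \theta_q \varepsilon_{t-q}$$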

The ARIMA model will estimate the coefficients for both these linear combinations, given three parameters as input:

  • p: The order of the autoregressive model (the number of lagged terms), described in the AR equation above.
  • q: The order of the moving average model (the number of lagged terms), described in the MA equation above.
  • d: The number of differences required to make the time series stationary. A stationary time series is one whose mean and variance do not change over time, apart from seasonal fluctuations.

For example, a series that fluctuates around a constant mean would be considered stationary, while one that drifts upward or downward over time would be nonstationary, even though both may exhibit seasonal patterns.

Now that we know what parameters we need to find, we can analyze our Peyton Manning data. At first glance, our data seems stationary. There doesn’t appear to be a time-dependent trend outside seasonal fluctuations, but we can test for this using the Augmented Dickey-Fuller Test.
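
A minimal sketch with statsmodels, assuming df["y"] is the log page-view series loaded earlier:

```python
from statsmodels.tsa.stattools import adfuller

# Augmented Dickey-Fuller test: the null hypothesis is that the series has a
# unit root (i.e., is non-stationary). A small p-value lets us reject that.
adf_stat, p_value, *_ = adfuller(df["y"])
print(f"ADF statistic: {adf_stat:.3f}")
print(f"p-value: {p_value:.3g}")
```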

Our test returns a p-value well below the significance level, so we can conclude that our series is indeed stationary. Our parameter value for d is zero.

Now we need to find the parameter values for p and q. In order to do this, I used autocorrelation plots. Autocorrelation and partial autocorrelation plots tell us how strongly lagged terms correlate with a given observation. While partial autocorrelation plots give the correlation with a lag term independent of the other lags, autocorrelation plots factor in the “inertia” carried over from other lags. Because of this, we can use the partial autocorrelation plot to estimate p, and the autocorrelation plot to estimate q.
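
With statsmodels, producing both plots might look like this (again assuming the df["y"] series from earlier):

```python
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

# Autocorrelation (used to estimate q) and partial autocorrelation
# (used to estimate p), out to 30 lagged days.
plot_acf(df["y"], lags=30)
plot_pacf(df["y"], lags=30)
plt.show()
```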

[Autocorrelation and partial autocorrelation plots of the page-view series]

Both plots show a periodic behavior in the lags, each around 7 days in length. This makes sense: Peyton Manning search frequency probably increases on game nights, when football is being played. In fact, these autocorrelation plots even show a slight 6-day correlation, which is likely due to Sunday night football. But since the lags of 7 days have the highest correlation with the observed value, we can estimate both p and q to be 7.

I should note that these autocorrelation plots presented a problem. The ARIMA parameters did not allow for lag inputs of more than about 10, which meant that looking at annual (365-day) or monthly (30-day) seasonality would be very difficult.

Now that we have our parameters, we can produce our ARIMA model.
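
A sketch of the fit with statsmodels, holding out the last year of data for validation (the library choice and the exact train/test split are assumptions):

```python
from statsmodels.tsa.arima.model import ARIMA

# Hold out roughly the last year of observations as a test set.
train, test = df["y"][:-365], df["y"][-365:]

# p = 7, d = 0, q = 7, as estimated above.
model = ARIMA(train, order=(7, 0, 7))
model_fit = model.fit()
print(model_fit.summary())

# Forecast over the held-out window.
forecast = model_fit.forecast(steps=len(test))
```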

Before we make our forecasts, we can check our model assumptions for variance and normality with a residual plot and density plot.
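
With the fitted statsmodels object from the sketch above, that check is only a few lines:

```python
import matplotlib.pyplot as plt

residuals = model_fit.resid

# Residuals over time: these should look like random noise around zero.
residuals.plot(title="Residuals")
plt.show()

# Kernel density estimate of the residuals: this should look roughly normal.
residuals.plot(kind="kde", title="Residual density")
plt.show()
```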

[Residual plot and kernel density plot of the ARIMA residuals]

Since the residuals appear to be randomly distributed, and the kernel density plot looks approximately normal, our model assumptions check out.

Plotting our model yields the following:
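
With the helper sketched earlier, that is a one-liner:

```python
plot_forecast(train, test, forecast, title="ARIMA(7, 0, 7) forecast")
```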

[ARIMA forecast plotted against the training and test data]

As we can see, not having access to the other scales of seasonality hurts this model’s viability. Because ARIMA can only capture one seasonal period at a time, the longer annual pattern is lost. Regardless, we can return some error estimators to validate our model; a quick way to compute them is sketched below.

  • MSE (mean squared error): 0.8916776825661407
  • MAPE (mean absolute percentage error): 0.10230290573107942
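
These could be computed along the following lines (scikit-learn and the train/test split from the earlier sketch are assumptions):

```python
import numpy as np
from sklearn.metrics import mean_squared_error

mse = mean_squared_error(test.values, forecast.values)
mape = np.mean(np.abs((test.values - forecast.values) / test.values))
print(f"MSE:  {mse}")
print(f"MAPE: {mape}")
```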

SARIMA

We can actually validate our ARIMA model using the auto-SARIMA model from pmdarima. The auto-SARIMA model estimates the parameter values for p, d, and q for us, so there is no need for the prelude above. In addition, SARIMA takes m, the period of the seasonality, as a parameter. Unfortunately, the model’s parameter limitations again constrain us to m < 10, so we can only look at weekly seasonality.
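
A sketch of the auto-SARIMA fit, reusing the train/test split from before:

```python
import pmdarima as pm

# auto_arima searches over p, d, q (and the seasonal P, D, Q) for us;
# m=7 encodes the weekly seasonality identified above.
sarima = pm.auto_arima(
    train,
    seasonal=True,
    m=7,
    stepwise=True,
    suppress_warnings=True,
)
print(sarima.summary())

# Forecast over the held-out window.
sarima_forecast = sarima.predict(n_periods=len(test))
```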

Fitting and plotting our model gives us the following:

[Auto-SARIMA forecast plotted against the training and test data]

Lastly, we can validate our model with error metrics:

  • MSE (mean squared error): 0.8916776825661407
  • MAPE (mean absolute percentage error): 0.10789283997956421

We see that our SARIMA model performed nearly identically to our ARIMA model, and in fact our ARIMA model gave a slightly lower mean absolute percentage error than SARIMA. We can be happy that we picked optimal parameters for our ARIMA model.

PROPHET

For our final model, we will be using Fbprophet.

Fbprophet is a library from Facebook intended to handle seasonal time-series datasets. Prophet implements a procedure for forecasting time series data based on an additive model in which non-linear trends are fit with yearly, weekly, and daily seasonality, plus holiday effects. In general, using Prophet requires much less hands-on work than our ARIMA model, and for the most part, we can feed our data directly into Prophet like so:
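
A minimal sketch, assuming the dataframe from earlier with its "ds" and "y" columns (the column names Prophet expects):

```python
from fbprophet import Prophet

# Prophet expects a dataframe with a 'ds' (date) column and a 'y' (value) column.
m = Prophet()
m.fit(df)

# Extend one year past the end of the data and predict over the whole range.
future = m.make_future_dataframe(periods=365)
forecast = m.predict(future)

fig = m.plot(forecast)
```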

This allows us to forecast one year ahead, and compare actual data with expected values and their boundaries.

[Prophet forecast with uncertainty intervals plotted against the observed data]

In addition, Prophet allows us to break down this data into seasonal components:
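
This is a one-liner on the fitted model:

```python
# Trend, weekly seasonality, and yearly seasonality, each in its own panel.
fig = m.plot_components(forecast)
```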

[Prophet component plots: overall trend, weekly seasonality, and yearly seasonality]

Manning’s page views peaked in 2012–2013, his MVP year. Unsurprisingly, Monday night football is when most fans look Manning up, and the monthly seasonal breakdown shows the crazy highs of December and March in stark contrast to the great drought of the summer.

Prophet can do even more, and identify changepoints in the data: the points where the trend is most likely to shift.
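
Overlaying the detected changepoints on the forecast plot is straightforward with Prophet's plotting helpers:

```python
from fbprophet.plot import add_changepoints_to_plot

fig = m.plot(forecast)
# Mark the changepoints (and the fitted trend) on the forecast figure.
add_changepoints_to_plot(fig.gca(), m, forecast)
```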

[Prophet forecast with detected changepoints marked]

With this feature, Prophet roughly estimates the start and end of the season, especially capturing the window of the playoffs.

By the eye test alone, our Prophet model looks much better and more coherent than ARIMA. But we can again validate the model predictions using MSE and MAPE.

  • MSE (mean squared error): 0.35800021765342394
  • MAPE (mean absolute percentage error): 0.059460265364126956

CONCLUSION

Both error estimators clearly point to Prophet as the more accurate model. For large time-series data with multiple seasonalities, ARIMA has many shortcomings. Simply using regression on previous lags to estimate future values won’t cut it in predicting more complex time-series datasets. ARIMA may be useful for more limited datasets with simpler seasonal effects, but particularly for things like sensor data, page views, or energy consumption, complex nonlinear models like Prophet are required to make predictions.

Deephaven’s integration with Jupyter Notebooks lets users put unique, library-specific plotting methods and operations side by side with Deephaven features. Deephaven’s plotting in particular provides user-friendly, interactive visualization options when used in conjunction with cutting-edge libraries like fbprophet.
