Visual Parameter Tuning with Facebook Prophet and Python

Visualizing time series cross validation

Bryant Crocker
4 min read · Dec 19, 2018

Facebook Prophet is by far my favorite Python package. It allows for quick and easy forecasting of many time series with a novel Bayesian model. Prophet estimates its parameters using a generalized additive model. More on Facebook Prophet can be found right here. Forecasting is important in many business situations, particularly supply chain management and demand planning.
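Under the hood, the Prophet paper decomposes a series into a few interpretable components:

y(t) = g(t) + s(t) + h(t) + ε_t

where g(t) is the trend, s(t) is the periodic seasonality, h(t) captures holiday effects, and ε_t is noise.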

In their paper about Facebook Prophet, the package authors recommend using visual means to assess the quality of forecasts. This is essential because a single scalar metric often doesn’t tell us much about how the model performs over time. The package provides convenient plotting functions to see how Prophet’s predictive performance changes over time. The authors suggest using a Shiny application to tune parameters. A while back I made a Shiny app that allows the user to visually assess Prophet models on commodity data.

Now, the Python part!

The focus of this article is another way to achieve the same thing using simple Python code. Personally, I have not been able to find any examples of visual parameter tuning with Prophet, so I decided to write some quick code to do just that.

I start by importing the essential libraries and pulling historical Russell 2000 data from Yahoo. The Russell 2000 has been very rocky over the past few months, so it would certainly be interesting to forecast where it’s going.

#Import necessary libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from fbprophet import Prophet
from fbprophet.plot import plot_cross_validation_metric
from fbprophet.diagnostics import performance_metrics
from fbprophet.diagnostics import cross_validation
import fix_yahoo_finance as yf
#this makes fix_yahoo_finance patch the pandas datareader
yf.pdr_override()
#grab the Russell 2000 data and select the adjusted close
mydata = yf.download('^RUT', start="2012-01-01")
RUT = mydata[['Adj Close']]
%matplotlib inline
plt.style.use('seaborn')

The next step is to do a quick exploratory plot to make sure the data is what we think it is. This is always a necessary step in data analysis.

fig, ax = plt.subplots(1, 1, figsize=(14, 8))
RUT.plot(ax=ax)
ax.set_title('Russell 2000 Adjusted Closing Price')
ax.set_ylabel('Closing Price')
plt.show()

It looks like things are as I expected; a quick Google search can confirm that this is what the prices should look like. The next step is fitting Prophet models and visualizing the effect of changing the parameters.

In this example I will tune the changepoint_prior_scale parameter of the model. Parameter tuning is an important step for any machine learning model: different values of a parameter are tried, and the value that optimizes predictive accuracy is chosen. Here the mean absolute percentage error (MAPE) is used to evaluate the models, as this is the metric suggested by the authors.
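For reference, the MAPE over n forecasted points is just the average relative error,

MAPE = (1/n) * Σ |y_t − ŷ_t| / |y_t|

so a MAPE of 0.10 means the forecasts are off by about 10% on average.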

In order to evaluate the model I use time series cross validation. Time series cross validation is different from the k-fold cross validation typically used with machine learning models: with Facebook Prophet’s time series cross validation you select historical cutoff points and, for each one, fit the model using only the data up to that cutoff, then measure how well it predicts the points that follow. More information about this procedure can be found here.
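To make the mechanics concrete before the full tuning loop, here is a minimal sketch of a single cross validation run on one fitted model (it assumes the same RUT2 dataframe that is built in the loop below):

# fit one Prophet model with the default changepoint prior
m = Prophet()
m.fit(RUT2)
# simulate forecasts: start after 1 year of training data,
# make a new cutoff every 180 days, forecast 365 days ahead
df_cv = cross_validation(m, initial='365 days', period='180 days', horizon='365 days')
# df_cv holds one row per (cutoff, date) pair, with columns
# ds, yhat, yhat_lower, yhat_upper, y, and cutoff
df_p = performance_metrics(df_cv)
# df_p aggregates the errors by horizon (mse, rmse, mae, mape, coverage)
print(df_p.head())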

Time series cross validation is easy with Facebook Prophet and is included with the package. Below I loop over candidate values of changepoint_prior_scale, plotting each forecast alongside its cross validated MAPE.

fig, ax = plt.subplots(10, 1, figsize=(14, 20))
ax = ax.ravel()
j = 0
for i in [0.01, 0.05, 0.10, 0.15, 0.20]:
    RUT2 = RUT.reset_index()
    RUT2 = RUT2.rename(columns={'Date': 'ds', 'Adj Close': 'y'})
    m = Prophet(changepoint_prior_scale=i)
    # fit the prophet model on the data
    m.fit(RUT2)
    # make a dataframe covering the next year
    future = m.make_future_dataframe(periods=365)
    # predict on this future dataframe
    forecast = m.predict(future)
    # plot the forecast
    fig = m.plot(forecast, ax=ax[j])
    ax[j].set_title('changepoint prior = ' + str(i))
    j += 1
    # run time series cross validation and plot the MAPE by horizon
    df_cv = cross_validation(m, initial='365 days', period='180 days', horizon='365 days')
    df_p = performance_metrics(df_cv)
    fig = plot_cross_validation_metric(df_cv, metric='mape', ax=ax[j])
    ax[j].set_title(str(i) + ' Change Point Prior Mean Absolute Percentage Error')
    ax[j].set_ylim(0, 0.5)
    j += 1

It’s easy to see visually that varying changepoint_prior_scale doesn’t have a big effect on the accuracy of the model. It appears that the default value of 0.05 is the best choice. In my experience this tends to happen when tuning Prophet models, and often I don’t tune them at all because of it.
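If you also want a numeric summary to back up the plots, here is a minimal sketch (reusing RUT2 from above) that averages the MAPE over the full horizon for each candidate value; the mape_by_prior name is just illustrative:

# collect one average MAPE per candidate changepoint prior
mape_by_prior = {}
for i in [0.01, 0.05, 0.10, 0.15, 0.20]:
    m = Prophet(changepoint_prior_scale=i)
    m.fit(RUT2)
    df_cv = cross_validation(m, initial='365 days', period='180 days', horizon='365 days')
    df_p = performance_metrics(df_cv)
    # average the MAPE across all forecast horizons
    mape_by_prior[i] = df_p['mape'].mean()
print(mape_by_prior)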
