Machine Learning & Artificial Intelligence add new wings to Amusement Parks
Amusement Parks Business Overview
In the global entertainment and leisure industry, amusement parks play a vital role. These recreational facilities offer a wide variety of entertainment options and act as a one-stop-shop solution. Based on industry estimates, the global amusement parks market is expected to recover and grow at a CAGR of 9% from 2021, reaching $89.5 billion in 2023. North America accounted for 33% of the market share in 2019, followed by Asia-Pacific with 29%. For years, the share of adults, teens and children visiting theme parks has been relatively stable. A growing population has driven market growth, but the same visitors may be going back to theme parks year after year. The issue lies in a theme park's ability not only to capture new visitors, but also to recapture lapsed ones. Without them, the market may not reach its full potential. Parks that are reaching capacity on their busiest days should also consider how they can smooth out attendance throughout the year. This is possible with simple applications of Machine Learning and Artificial Intelligence.
Machine Learning and Artificial Intelligence have found their way into different areas of the entertainment industry. Some theme parks have laid out a vision for the future that involves AI robots of their most beloved cartoon characters; wristbands (park-wide passes) that track customers' movements, analyze purchasing habits and report real-time data; recommendation engines; and content creation and delivery that predicts and adapts to consumer tastes and preferences to delight users with every interaction. Such systems recognize the patterns of good stories and predict what will resonate with consumers, analyze viewer facial expressions to predict engagement with content, and animate speech using Deep Learning. The use of Virtual and Augmented Reality technologies within amusement parks provides an immersive experience to customers. However, to make all of this a success, the major questions to be answered are: Where is the largest and fastest growing market for amusement parks? How does the market relate to the overall economy, demography and other similar markets? What forces will shape the market going forward? That is, its characteristics, segments, competitive landscape, market shares and so on.
Problem Statement
Following on this conversation, one of the major concerns for our client in the Leisure & Entertainment industry was: how will the business perform post COVID? How many customers are expected to visit? Based on information about visitation to the theme parks, informed decisions can be made around marketing efforts, supply chain efficiency, revenue and sales prediction, inventory and resource optimization, and planning for park operations.
Why is there a need for Forecasting?
Footfall forecasting is key for parks to understand the number of visitors in their parks during any given time period. From this data, parks can obtain and comprehend information about the number of people physically in the parks and the number of people leaving without converting to a specific goal. With the integration of other customer-associated data, including demographics, pathways and interest zones, the marketing strategy and park performance can be substantially enhanced. Besides increasing the conversion ratio, other results to be achieved are:
· Customer flow management
· Identifying customer trends
· Deciding on the best possible opening hours
· Improving staff planning and optimizing employee expenditures
· Interpreting the results and effects of promotions or marketing campaigns
· Identifying endogenous and exogenous factors that affect customer footfall
· Analyzing the busiest hours to optimize the ratio of customers to staff
Core aspects of this use case
In Leisure & Entertainment, demand is rarely driven by a single mechanism. Historical data plays a vital role; however, at the heart of this view is the fact that internal elements and external elements can contribute equally in governing visitor behavior. The expected demand can be a function of different kinds of structural variation, such as trend or seasonality. Regular demand patterns of customers may be affected by price variations and weather. Demand peaks may occur due to promotions and holidays. The seasonality in the daily customer traffic and the effects of holidays at different locations can be used as modeling inputs.
Our approach
In this case study, we have analyzed the amusement park's data to understand how historical footfall, promotions, revenue channels, climate variability, digital interactions, events, holidays and economic factors affect customer visitation to parks, and how we can use these features to forecast future footfall. Setting up an appropriate validation framework is extremely important: it enables you to experiment with various models and compare them objectively. Lag-based features are very useful in providing trend information about the time series data.
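As a minimal sketch of these two ideas (not the exact code used in this study), lag features and a time-aware holdout split might look like this in pandas; the synthetic footfall series is only a stand-in for the real data:

import numpy as np
import pandas as pd

# Illustrative daily series (a stand-in for the park's real footfall data)
idx = pd.date_range("2014-01-01", "2019-12-31", freq="D")
df = pd.DataFrame({"footfall": np.random.default_rng(0).poisson(5000, len(idx))}, index=idx)

# Lag-based features carry trend information from the recent past
for lag in [1, 7, 28]:
    df[f"footfall_lag_{lag}"] = df["footfall"].shift(lag)
df = df.dropna()

# Time-aware validation: hold out the most recent 90 days, never shuffle time series data
train, valid = df.iloc[:-90], df.iloc[-90:]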
Keeping the above in mind, we started exploring the most popular methods for Multivariate forecasting like Vector Autoregression Moving-Average with Exogenous Regressors (VARMAX), Facebook’s Prophet and Seasonal Autoregressive Integrated Moving-Average with Exogenous Regressors (SARIMAX).
Data Understanding:
There are 5 areas of data that impact this problem:
Internal Data: Data contained in an EDW (enterprise data warehouse), which holds sales history (e.g. tickets purchased, F&B purchased, customer purchase history)
Digital Data: This includes data from online activities of visitors
Economic Data: Data available that is external to the organization including oil price, Consumer Price Index and other non-traditional data sources
Environmental Factors: Data on climatic conditions for a specific country, such as temperature and day length
Holiday & Events: Data on vacations, weekends and events for specific countries
Displaying the columns and data from the dataset: the data is at a daily level from 2014 to 2019 with 13 features. For a time series dataset, it is imperative to set the datetime as the index:
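A minimal loading sketch, assuming the data sits in a CSV with a 'date' column (the file name and column name are placeholders):

import pandas as pd

df = pd.read_csv("park_footfall_daily.csv", parse_dates=["date"])   # placeholder file name
df = df.set_index("date").sort_index()   # datetime index is essential for time series work

print(df.shape)    # daily rows from 2014 to 2019, 13 feature columns
print(df.dtypes)   # inspect the columns and their types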
Combining Data and Final Visualizations
This section displays the behavior of each data feature. It ends with a brief discussion on how we selected features and combined the various data sources. The external data was merged with the existing data using the appropriate lag times. In cases where the external data is monthly or weekly, we use linear interpolation to create daily values to be able to combine the data. Note that there are possibly better interpolation approaches, and that could be an area of improvement in the future.
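For instance, a monthly indicator such as the Consumer Price Index can be upsampled to daily frequency with linear interpolation before being joined onto the daily park frame; the series below is illustrative, and the 30-day reporting lag is an assumption:

import numpy as np
import pandas as pd

# Illustrative monthly economic series (a stand-in for CPI from an external source)
months = pd.date_range("2014-01-01", "2019-12-01", freq="MS")
econ = pd.DataFrame({"cpi": np.linspace(100.0, 115.0, len(months))}, index=months)

# Upsample to daily frequency and fill the gaps by linear interpolation
econ_daily = econ.resample("D").interpolate(method="linear")

# Join onto the daily park data (df from the loading sketch above),
# shifting by an assumed 30-day publication lag
merged = df.join(econ_daily.shift(30, freq="D"), how="left")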
Clearly, there are some issues when the data is grouped this way. In particular, there are several series that seem to have little or no recent data; those will present a problem in the model. We generated a summary by each variable to investigate those that have very unusual time history. It can be seen that for the purpose of this analysis, Promotion Driven Visitors and New Online Users are essentially irrelevant, so we won’t include them.
Also, FNB Spend seems to have low sales and almost nothing on some days, so we exclude it as well, at least initially. For the purpose of this write-up, we will press on to modeling. But before that, let's look at our target variable as well.
Although we can see a fairly clear trend in historical footfall, the data is noisy, and it is not evident that there are patterns that can be modeled. There are big dips at the end of each year, as well as some possibly recurring patterns within the year. We chose to move forward with modeling the smoothed data as the target output. Stationarity is a very important property of time series. To check whether the data is stationary, we will use the Augmented Dickey-Fuller test, the most popular statistical method for determining whether a series is stationary. It is also called the Unit Root Test. As we know, a stationary time series has a mean and variance that do not change over time.
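The Augmented Dickey-Fuller test is available in statsmodels; a quick sketch (the column name is the same placeholder used above):

from statsmodels.tsa.stattools import adfuller

adf_stat, p_value, *_ = adfuller(df["footfall"].dropna())
print(f"ADF statistic: {adf_stat:.3f}, p-value: {p_value:.3f}")
# A p-value below 0.05 rejects the unit-root null hypothesis,
# i.e. the series can be treated as stationary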
Modeling
We are now ready to begin modeling. The approach we like to take is to build a simple model as a baseline right away, then use that to compare more complicated methods. If we can't achieve significant improvement over the baseline, it isn't fruitful to use more complex models. Here, we will build a simple FB Prophet model with some hyperparameter tuning to see if that improves the model. From that result, we'll move to VARMAX and SARIMAX to try to improve model performance.
a. FB Prophet Method
First, we experimented with Prophet along with additional regressors. Our goal was to check how extra regressors would weigh on the forecast calculated by Prophet. Prophet is a procedure for forecasting time series data based on an additive model, where non-linear trends are fitted with yearly, weekly and daily seasonality, plus holiday effects. Prophet uses a decomposable time series model with three main components: trend, seasonality and holidays. Additional regressors can be added to the linear part of the model using the add_regressor method:
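A sketch of that call, assuming the training frame has been renamed to Prophet's ds/y convention and that 'temperature' and 'is_holiday' are two of the exogenous columns (both names are placeholders):

from prophet import Prophet   # older releases ship the same class as `fbprophet`

# Prophet expects columns 'ds' (date) and 'y' (target)
train_df = train.reset_index().rename(columns={"date": "ds", "footfall": "y"})

m = Prophet(yearly_seasonality=True, weekly_seasonality=True, daily_seasonality=False)
m.add_regressor("temperature")   # placeholder exogenous regressor
m.add_regressor("is_holiday")    # placeholder exogenous regressor
m.fit(train_df)

future = m.make_future_dataframe(periods=90)
# Before calling m.predict(future), every added regressor must also be filled in
# for the future dates (e.g. from a weather forecast and a holiday calendar).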
Forecast and Estimation
We have used 10 exogenous series along with historical footfall data at a daily level.
Prophet gives us upper and lower bounds, so we can make plans based on these values as well. It also decomposes the output into trend and seasonal components. You can see the filtered output below; the upper and lower bounds are removed for better readability. Our forecast is plotted in the graph below. If you look carefully, the red line (our prediction) leads the blue line (actual). This is not good: it means our forecast is always ahead of reality.
In terms of the evaluation metric, the root mean squared error (RMSE) was used to assess the predictive power of each of the approaches.
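For reference, RMSE can be computed with scikit-learn; since the figures below are quoted as percentages, we assume they are normalized by the mean of the actuals:

import numpy as np
from sklearn.metrics import mean_squared_error

def rmse_percent(y_true, y_pred):
    """RMSE normalized by the mean of the actuals, expressed as a percentage (assumed convention)."""
    rmse = np.sqrt(mean_squared_error(y_true, y_pred))
    return 100.0 * rmse / np.mean(y_true)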
By analyzing these results, we can conclude that, in general, the best size for the historical window is 30 days; with this value, the results obtained are generally better in terms of RMSE. We can also see that Prophet failed to produce good forecasts for the dataset used, with RMSE = 39%. It did not capture the variation for lower values, and certain peaks were also missed. Both the time series components and the features are key to interpreting the behavior of the time series.
b. VARMAX Method
One of the most well-known and widely used families of time-series models is the Vector Autoregression Moving-Average with Exogenous Regressors (VARMAX). It is a combination of the VAR and VMA models and a generalized version of the ARMA model for multivariate stationary time series. The VARMAX model is generically specified as:
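The generic VARMAX(p, q) form (reproduced here from the standard specification) is:

y_t = \nu + A_1 y_{t-1} + \dots + A_p y_{t-p} + B x_t + \epsilon_t + M_1 \epsilon_{t-1} + \dots + M_q \epsilon_{t-q}

where y_t is the vector of endogenous series, x_t the exogenous regressors, \nu an intercept vector, A_i and M_j coefficient matrices, and \epsilon_t a white-noise error vector.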
We have used 10 exogenous series. Note that we needed to allow for more iterations than the default (maxiter=50) in order for the likelihood estimation to converge. This is not unusual in VAR models, which have to estimate a large number of parameters, often on a relatively small number of time series.
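A statsmodels sketch of this setup; the order is illustrative, and endog/exog stand for the endogenous series and the 10 exogenous regressors respectively:

from statsmodels.tsa.statespace.varmax import VARMAX

# endog: DataFrame of endogenous daily series; exog: DataFrame of the 10 exogenous regressors
model = VARMAX(endog, exog=exog, order=(2, 1))    # VARMA(2, 1) with exogenous terms
results = model.fit(maxiter=1000, disp=False)     # default maxiter=50 may not converge
print(results.summary())

# Forecasting requires exogenous values over the forecast horizon as well
forecast = results.forecast(steps=90, exog=exog_future)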
Forecast and Estimation
We observe that VARMAX is able to capture the variations in smaller peaks but could not capture the high peaks. This model better approximated the real values, with RMSE = 28%, once certain hyperparameters were altered. We have given our model the ability to course correct a bit by allowing it to consider the magnitudes and directions of its errors. This comes at the expense of a longer model estimation process.
From the estimated VARMAX model, we have plotted the response functions of the endogenous variables.
Just like with the Prophet model, the one flaw we noticed here is that it does not fully support using the intercorrelations between multiple variables to forecast the output.
c. SARIMAX Method
The Seasonal Autoregressive Integrated Moving-Average with Exogenous Regressors, or SARIMAX, model is an approach for modeling time series data that may contain trend and seasonal components. This model has hyperparameters that control the nature of the model fitted to the series. We are experimenting with SARIMAX using exogenous factors that have an effect on the time series prediction.
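A hedged statsmodels sketch; the (p, d, q) and seasonal (P, D, Q, s) orders are illustrative choices, with s=7 capturing weekly seasonality on daily data, and exog_cols standing for the exogenous columns:

from statsmodels.tsa.statespace.sarimax import SARIMAX

model = SARIMAX(
    train["footfall"],             # endogenous daily footfall
    exog=train[exog_cols],         # exogenous regressors (weather, holidays, economics, ...)
    order=(1, 1, 1),               # non-seasonal (p, d, q), illustrative only
    seasonal_order=(1, 1, 1, 7),   # seasonal (P, D, Q, s) with a weekly period
)
results = model.fit(disp=False)

# Out-of-sample forecast over the validation window, with matching exogenous values
forecast = results.get_forecast(steps=len(valid), exog=valid[exog_cols])
pred_mean = forecast.predicted_mean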
Forecast and Estimation
The inclusion of weekly seasonality and holiday effects at different time periods revealed that SARIMAX models are a better framework. Here, SARIMAX models the seasonal element in the multivariate data.
With the Model Extension Framework, we quickly fine-tuned parameters and compared the results against those from Prophet and VARMAX with minimal parameter settings. We found that SARIMAX with fine-tuned parameters produced better results for this data.
That said, SARIMAX seems to be the best performing method overall, with RMSE = 24%.
Evaluating the models
We have used Root Mean Square Error (RMSE) to evaluate each model's performance. It has been observed that on long time scales the models do not yet predict all the behaviors seen in the target data, although most of the large changes are modeled fairly well. The table shows that even the simple Prophet model does better on the periods targeted, and SARIMAX improves on that by multiple percentage points.
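Summarizing the RMSE figures reported above:

Model     RMSE
Prophet   39%
VARMAX    28%
SARIMAX   24%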
Results indicate that, although a one-size-fits-all approach does not exist, a traditional statistical method such as SARIMAX overall prevails over the other approaches.
Conclusions and Next Steps
We have seen that the FB Prophet model, VARMAX and SARIMAX can be effective for modeling complex time-series data. Advantages of this approach include the ability to integrate business and external factors, as well as historical data, as predictors. With all the strong hypotheses made throughout the document, the final forecast is good and describes the series accurately.
Avenues for improving this method could be investigated further. It is clear that there are processes that are not yet incorporated into our model. For example, there may be additional factors that drive visitors and could be incorporated as external data, such as hotel bookings, incoming flights and school holidays. In addition, we believe some of the large swings at mid-year are due to business practices: as half-year targets approach, sales teams are pushed to sell more tickets or pull in opportunities. There may be customer incentives in play to drive some of that. These factors can be integrated if a model of the actual business practices can be defined.
Another area of work would be to investigate and improve the modeling of components of the forecast, such as modeling individual features. Although the model is built with that level of granularity, we noticed in the baseline model that some coefficients are negative. We think that is due to interactions or other factors unaccounted for, but it means the model may not work well if used to predict a single category. This could be tackled by modeling each desired category separately and then combining those models.
Please feel free to share your opinions and thoughts.
Happy learning!