Explaining Financial Deep Learning Forecasts

Predicto · Published in The Startup · Sep 17, 2020
Through the “Eyes” of a Deep Learning model

In a previous post, we covered how important it is to deal with uncertainty in financial Deep Learning forecasts. In this post, we'll attempt a first introduction to how we deal with explainability.

Neural networks have been applied to various tasks, including stock price prediction. Although highly successful, these models are frequently treated as black boxes. In most cases we know that the performance on the test data is satisfactory, but we do not know why the model came up with a specific output.

There are cases where an “explanation” of the model’s conclusion is desirable, if not necessary. Examples where explainability is a practical necessity include the diagnosis of medical conditions and the control of self-driving cars. In the former, an explanation of the model’s output can help the expert make the final decision and decide whether to trust the prediction or not. In the latter, such an “explanation” can help identify faulty decision-making processes that could have disastrous outcomes.

In the context of stock price prediction and trade recommendation the goal of explainable AI is two-fold:

  1. To help the investor assess whether to trust a recommendation or not. In combination with forecast uncertainty measurement, it can be a powerful tool.
  2. To identify what our model “saw” when it did a good or bad job predicting a movement. We can then use this information to iterate and improve our models by adding/removing features as we go.

At Predicto, we aim to provide a platform where investors and traders who are experienced and familiar with AI can study explainable financial forecasts based on deep learning models.

Let’s study a real case from our platform:

Figure 1: Short-term forecast for Home Depot (HD) stock price movement on August 29th 2020

In the above scenario (Figure 1), we would like to explain why our model was successful at predicting the stock price of Home Depot. One such explanation can be provided by examining the importance of each feature in the model’s prediction. A stronger explanation would also pinpoint the time period of the observed data that was important in that prediction.
Our strategy will be to “skip” or exclude each individual feature from the analysis. If the prediction changes drastically, then that feature is essential to the decision-making process. We are going to “skip” the technical details of how features are skipped and save them for a future technical post.
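To make the idea concrete, here is a minimal sketch of what skipping a feature could look like, assuming a Keras-style model with a `predict` method and an input window of shape (timesteps, features). The mean-replacement masking and all the names below are illustrative assumptions, not our actual implementation:

```python
import numpy as np

def feature_influence(model, window, feature_names):
    """Score each feature by how far the forecast moves when it is 'skipped'."""
    # Forecast on the untouched input window (hypothetical Keras-style API).
    baseline = model.predict(window[np.newaxis, ...])[0]
    influences = {}
    for i, name in enumerate(feature_names):
        occluded = window.copy()
        # One way to "skip" a feature: flatten it to its mean, so the
        # model sees no informative variation in that column.
        occluded[:, i] = window[:, i].mean()
        prediction = model.predict(occluded[np.newaxis, ...])[0]
        # Influence = total deviation from the original forecast.
        influences[name] = float(np.abs(prediction - baseline).sum())
    return influences
```

A large score means the forecast falls apart without that feature; a score near zero means the model barely used it.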

Figure 2: Feature influence as a whole
Figure 3: Feature influence per period

Figure 2 shows the overall influence of each individual feature for the Home Depot case. In Figure 3 we go one step further and break down the analysis into distinct time periods. Figure 4 shows the deviation from the original prediction induced by skipping each feature.

Figure 4: Different predictions from our model by removing specific features each time
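For readers who want a feel for how a per-period breakdown like Figure 3 (and the alternative predictions in Figure 4) could be produced, here is a hedged sketch under the same assumptions as above; the period-splitting scheme is an illustrative choice, not necessarily the one our platform uses:

```python
import numpy as np

def feature_influence_per_period(model, window, feature_index, n_periods=4):
    """'Skip' one feature within each time period and record the deviation."""
    baseline = model.predict(window[np.newaxis, ...])[0]
    period_len = window.shape[0] // n_periods
    deviations = []
    for p in range(n_periods):
        occluded = window.copy()
        start = p * period_len
        end = window.shape[0] if p == n_periods - 1 else start + period_len
        # Mask the feature only inside this period, leaving the rest intact.
        occluded[start:end, feature_index] = window[:, feature_index].mean()
        # Each occluded window yields an alternative forecast, which is
        # what the dashed prediction lines in Figure 4 correspond to.
        prediction = model.predict(occluded[np.newaxis, ...])[0]
        deviations.append(float(np.abs(prediction - baseline).sum()))
    return deviations  # one influence score per time period
```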

So in this scenario (Figures 2, 3 and 4) we observe that one specific feature has a high influence, and more specifically, the last 2 weeks of measurements of that feature. The feature that stands out in Figures 2 and 3 is the one that was skipped to produce the dark brown dashed prediction line in Figure 4, which sits right on the edge of our original forecast’s lower confidence interval limit. Based on the information we get from those 3 figures, we can form an opinion that the last 2 weeks’ movements of that feature are correlated with the prevention of a bigger drop in this stock’s price! Now it’s up to the human observer to decide whether this is useful information by doing some more independent research related to that feature’s movements during that time period.

Don’t get us wrong though: This is just one component that can be used to explain a prediction. A forecast can be influenced by specific combinations of features and other factors. In this post we have just scratched the surface of explainability.

The goal of investigating these feature influence graphs is to provide a tool for investors so that they can dig deeper and decide for themselves if a prediction makes sense based on the features and time periods that appear to have influenced the model’s forecast.

This concludes our brief overview of how Explainable Deep Learning models can point us in the right direction. We’ll cover technical details of our uncertainty estimation and explainability generation in a future post.

See you soon and stay safe!

Web https://predic.to — Twitter @ThePredicto — GitHub ThePredicto

