Why your models might not work after Covid-19

Patrick Robotham · Eliiza-AI · Apr 3, 2020


An introduction to concept and data drift.

I stumbled across the following tweet last week:

Why would our models start failing? What does he mean by “drift”? How do we know if we have to retrain? I’ll try to answer these questions in this blog post, as well as explain how we can help.

Concept Drift

When we model a phenomenon, we assume that our data is generated by some underlying process. Concept drift occurs when the world changes so that the target variable we’re interested in predicting now comes from a new process. From a statistical modelling point of view, this is like playing a game where the rules can change while you play.

Concept drift is more likely to occur in domains of high complexity, especially those involving people’s behavior. If you use ML for image recognition, chances are you won’t have to deal with concept drift very much (unless the sensor changes or new kinds of objects pop up). On the other hand, if you use it for, say, demand forecasting, it would be a good idea to retrain your model on training data that takes the Covid-19 outbreak into account (e.g. by throwing out data from before the outbreak).

Business decisions made using models suffering from concept drift are likely to be flawed and impact the bottom line. You may waste money on inappropriate marketing, overstock or understock items, or offer the wrong prices.

Toilet Paper Shortages: an example of concept drift.

Australian supermarkets are currently experiencing a shortage of toilet paper, and from a forecasting point of view this is concept drift in action. The amount of an item that’s in stock depends on how much is forecast to sell, and forecast sales are usually based on how much was sold previously. However, the COVID-19 outbreak led to a lot of hoarding, and demand for toilet paper spiked. There was then a lag as suppliers and manufacturers struggled to catch up with the new demand. Concept drift may hit supermarkets on the way down as well, when consumers stop buying toilet paper altogether because they have a six-month supply.

Below is a (fictional) illustration, based on a moving-average forecast. You can see how a forecast based on the old data produces much greater error than usual.

The spike in demand for toilet paper is a great example of concept drift. A naive forecasting model will assume that this rise in demand is permanent.
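
To make the illustration concrete, below is a minimal Python sketch of the same idea, with invented numbers: a 14-day moving-average forecast keeps predicting “normal” demand after a spike hits, so its error balloons.

```python
import numpy as np

# Fictional daily demand: stable around 100 units, then a
# panic-buying spike from day 60 onwards.
rng = np.random.default_rng(0)
demand = rng.normal(100, 5, size=90)
demand[60:] += 150

window = 14  # forecast each day as the mean of the previous 14 days
forecast = np.array([demand[t - window:t].mean()
                     for t in range(window, len(demand))])
error = np.abs(demand[window:] - forecast)

print(f"MAE before the spike: {error[:40].mean():.1f}")  # small
print(f"MAE after the spike:  {error[46:].mean():.1f}")  # much larger
```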

Data Drift

Another cause of problems for models in production is data drift. While concept drift is about changes in the target variable (an output, like forecast toilet paper sales), data drift occurs when there are changes in the input data. An example of data drift would be a sudden sensor malfunction.

To recognise data drift, log and analyse incoming data. For example, if column averages change drastically, chances are that something is going on with the way data are collected.
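
One way to operationalise this is a two-sample test per column, comparing live data against a sample retained from training time. Below is a minimal sketch using SciPy’s two-sample Kolmogorov–Smirnov test; the function name and threshold are illustrative choices, not from any particular monitoring product.

```python
import numpy as np
from scipy.stats import ks_2samp

def column_has_drifted(reference: np.ndarray, live: np.ndarray,
                       alpha: float = 0.01) -> bool:
    """Compare one column's live values against a training-time
    reference sample with a two-sample Kolmogorov-Smirnov test."""
    _, p_value = ks_2samp(reference, live)
    return p_value < alpha

# Illustrative data: a sensor whose readings have shifted upwards.
reference = np.random.normal(20.0, 2.0, 10_000)  # training-time sample
live = np.random.normal(23.5, 2.0, 500)          # this week's readings
print(column_has_drifted(reference, live))       # True -> investigate
```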

Another thing that can happen is that a model is set up to pull some of its data from external sources, but one of the sources is no longer operational or has changed in nature. This would result in a much larger fraction of missing data than the model is set up to cope with. Again this would show up in logs.
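
A simple check along these lines compares each column’s missing-value fraction in a fresh batch against a baseline recorded at training time. This is a sketch; the helper name and the 5% tolerance are assumptions to adapt to your data.

```python
import pandas as pd

def columns_with_new_missingness(batch: pd.DataFrame, baseline: dict,
                                 tolerance: float = 0.05) -> list:
    """List columns whose missing-value fraction now exceeds the
    training-time baseline by more than `tolerance`."""
    current = batch.isna().mean()  # per-column fraction of missing values
    return [col for col, base in baseline.items()
            if current.get(col, 1.0) - base > tolerance]

# e.g. columns_with_new_missingness(todays_batch, {"price": 0.01, "region": 0.0})
```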

To remedy concept drift or data drift once it is detected, you will have to retrain your model on data generated by the new process.

Fighting Drift

The best tool we have against our models becoming unexpectedly inaccurate is a good model monitoring system. In particular, you should do the following (a logging sketch follows the list):

  1. Log inputs. This is how you detect data drift. A model may be trained on data of one kind, and then start behaving weirdly when it encounters data that was completely out of scope during training.
  2. Log model predictions. If your model starts delivering very different outputs than it did historically, chances are the domain has changed and you may be subject to concept drift.
  3. Log model accuracy. For some applications of machine learning (e.g. forecasting), you may be able to examine predictive accuracy directly. If predictive accuracy starts decreasing substantially, chances are you have concept drift.
  4. Log downstream behaviour. If your model matters, it will have an effect on the real world. For example, a recommendation system will affect consumer behavior. If this behavior suddenly changes, then your model may no longer be appropriate.
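
Here is the promised sketch: a thin wrapper covering points 1 and 2 for a scikit-learn-style model. It is illustrative only; in practice you would log to your metrics store rather than a plain logger, and accuracy (point 3) is usually joined on later, once the true outcome arrives.

```python
import json
import logging
import time

logger = logging.getLogger("model_monitor")

def predict_and_log(model, features: dict) -> float:
    """Record every prediction's inputs (point 1) and output (point 2)
    so drift can be analysed later."""
    prediction = model.predict([list(features.values())])[0]
    logger.info(json.dumps({
        "ts": time.time(),
        "inputs": features,
        "prediction": float(prediction),
    }))
    return prediction
```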

When do I retrain?

There are two options for model retraining:

  1. Continual retraining, or
  2. Retraining after an alert is triggered.

To continually retrain a model, you need a machine learning pipeline. This will automate the collection and preprocessing of training data, model training, cross validation and deployment.
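
As a rough sketch, one run of such a pipeline might look like the following. Here load_training_data and deploy are placeholders for your own data-collection and deployment steps, and the quality gate of 0.7 is an assumed threshold, not a recommendation.

```python
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

def retrain(load_training_data, deploy, min_score: float = 0.7):
    """One scheduled retraining run: fetch fresh data, cross-validate,
    and only deploy if quality holds up."""
    X, y = load_training_data()                        # collect + preprocess
    model = GradientBoostingRegressor()
    score = cross_val_score(model, X, y, cv=5).mean()  # R^2 by default
    model.fit(X, y)                                    # train on all the data
    if score >= min_score:                             # deployment gate
        deploy(model)
    return score
```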

If you decide to retrain only when an alert is triggered, you need to find appropriate thresholds. The idea is to treat this as a quality-control problem: concept drift is analogous to a malfunctioning machine part, which produces greater than usual variance.

A product manager or domain expert can help decide an acceptable model accuracy threshold. If, on the other hand, you alert based on changes in a metric, a starting threshold could be 4 standard deviations away from the historical average. If metrics are collected daily and are roughly normally distributed, you would expect a 4-standard-deviation change to happen only about once every 43 years.
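
The alerting rule itself is simple. A minimal sketch, assuming the metric’s history is roughly normally distributed:

```python
import numpy as np

def metric_alert(history: np.ndarray, today: float,
                 n_sigma: float = 4.0) -> bool:
    """Alert when today's value of a metric sits more than `n_sigma`
    standard deviations from its historical mean. For a roughly normal
    daily metric, a 4-sigma day occurs about once every 43 years."""
    z = abs(today - history.mean()) / history.std()
    return z > n_sigma
```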

Which data should I retrain on?

When concept drift occurs, you should not retrain the model naively on old data that was generated by a different process than the one you’re modelling. Sometimes there will be a dramatic event that signals concept drift (e.g. a pandemic outbreak!), and sometimes your model may just gradually get less and less accurate.

There are three different approaches for handling this:

  1. Exclude all training data from before the concept drift. This leaves much less training data available, and is only viable in the “dramatic event” case.
  2. Weight your training data by recency, so that the most recent data counts the most heavily (see the sketch after this list).
  3. Create a new model that explicitly corrects for the source of concept drift. An example of this would be forecasting models that allow for different behavior during holidays.
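
Here is the recency-weighting sketch promised in option 2, using scikit-learn’s sample_weight argument. The exponential form and the 90-sample half-life are assumptions to tune for your domain.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_recency_weighted(X, y, half_life: float = 90.0):
    """Approach 2: exponentially down-weight older rows, so a sample's
    influence halves every `half_life` observations."""
    age = np.arange(len(y))[::-1]      # newest row gets age 0
    weights = 0.5 ** (age / half_life)
    return LinearRegression().fit(X, y, sample_weight=weights)
```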

Getting help

As we have seen, drift is a real issue for systems that depend on machine learning or statistics, such as inventory management, pricing, advertising, or trading. AWS, Azure and Google Cloud Platform all offer tools for model monitoring, and we have experience integrating them into machine learning pipelines. If you think you have a problem with concept drift or data drift, we at Eliiza can help, whether through advice, designing custom drift monitoring and retraining pipelines, or integrating automated services into your cloud platform. Please get in touch if you want to know more.
