MLOps: Detect and Overcome Model Drift

Alessandro Lamberti
Published in Artificialis
3 min read · Jun 16, 2022


Photo by Chris Lawton on Unsplash

Machine Learning models can be very sensitive to change.
In A Comprehensive Guide on How to Monitor Your Models in Production, I explained how, even though a model’s weights remain identical over time, the environment around the model will inevitably change, eventually making it obsolete.

Imagine a Machine Learning model trained to classify whether or not an email contains spam/phishing content: you’ll probably start off with a dataset found online, or some data provided to you.

However, time will pass. Threats will change; hackers, cybercriminals and scammers will become smarter and come up with new ideas. Once that happens, incoming emails will fall outside the domain your model learned, and it won’t be able to classify spam properly anymore. This is one of many examples of model drift.

Model drift is a threat to your system when the environment in which the model is deployed is constantly changing. If the model instead operates in a static environment and is fed data from the same distribution as its training set, its performance shouldn’t differ much.
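This effect is easy to reproduce on synthetic data: a fixed model keeps its accuracy on data drawn from the training distribution, but degrades once the input distribution shifts. A minimal sketch (the Gaussian classes, the shift of 2, and the use of scikit-learn are illustrative assumptions, not from the article):

```python
# Sketch: a frozen model's accuracy holds on static data but drops
# when the serving distribution shifts away from the training one.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    # Two Gaussian classes in 2D; `shift` moves both clusters,
    # simulating a change in the serving-time input distribution.
    X0 = rng.normal(-1 + shift, 1.0, size=(n, 2))
    X1 = rng.normal(+1 + shift, 1.0, size=(n, 2))
    X = np.vstack([X0, X1])
    y = np.array([0] * n + [1] * n)
    return X, y

# Train once; the weights stay identical from here on.
X_train, y_train = make_data(2_000)
model = LogisticRegression().fit(X_train, y_train)

X_static, y_static = make_data(1_000)             # same distribution
X_shifted, y_shifted = make_data(1_000, shift=2)  # drifted inputs

print("static accuracy :", model.score(X_static, y_static))
print("shifted accuracy:", model.score(X_shifted, y_shifted))
```

The weights never change between the two evaluations; only the data does, which is exactly the scenario described above.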

There are two main types of model drift you should take into consideration:

  • Data drift — the most common type. It occurs when the statistical properties of certain predictors change. As the…

