A Simple Explanation Of L1 And L2 Regularization

Overfitting, Regularization, and Complex Models

Kurtis Pykes
Geek Culture


Photo by steffen wienberg on Unsplash

When a machine learning model performs poorly, it is usually because it has either underfitted or overfitted the training data. Underfitting is when the learned hypothesis is unable to accurately capture the relationship between the input and output features — this results in poor performance on both the training data and the test data. Considering the model alone, a good remedy for underfitting is to switch to a more expressive model that can learn complex relationships.

In contrast, overfitting is when the learned hypothesis fits the training data so closely that it performs poorly on unseen instances — we say the model is unable to generalize. This is a common pitfall when we use complex models. To control the behavior of these models, we use something called regularization.
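To make this concrete, here is a minimal sketch of how an L1 (lasso) or L2 (ridge) penalty changes gradient descent for linear regression. The data, function names, and hyperparameters are all hypothetical, chosen only to show the characteristic effect: L1 pushes the weight of an irrelevant feature to (nearly) zero, while L2 merely shrinks it.

```python
# Hypothetical sketch: linear regression via gradient descent,
# with an optional L1 or L2 penalty on the weights.
def fit(X, y, penalty="l2", lam=0.1, lr=0.01, steps=5000):
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(steps):
        # Gradient of the mean squared error.
        grad = [0.0] * d
        for xi, yi in zip(X, y):
            err = sum(wj * xj for wj, xj in zip(w, xi)) - yi
            for j in range(d):
                grad[j] += 2 * err * xi[j] / n
        for j in range(d):
            if penalty == "l2":
                # Ridge: gradient of lam * w_j^2 shrinks weights smoothly.
                grad[j] += 2 * lam * w[j]
            elif penalty == "l1":
                # Lasso: subgradient of lam * |w_j| pushes weights to exactly 0.
                grad[j] += lam * (1 if w[j] > 0 else -1 if w[j] < 0 else 0)
            w[j] -= lr * grad[j]
    return w

# Synthetic data: y depends only on the first feature;
# the second feature is noise and carries no signal.
X = [[float(i), 0.1 * (i % 3)] for i in range(10)]
y = [3.0 * row[0] for row in X]

w_l2 = fit(X, y, penalty="l2")
w_l1 = fit(X, y, penalty="l1")
print("ridge:", w_l2)  # noise weight is shrunk, but stays nonzero
print("lasso:", w_l1)  # noise weight is driven to ~0 (sparsity)
```

Both fits recover a weight near 3 for the informative feature; the difference shows up on the irrelevant one, which is exactly why L1 is often used for feature selection.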

Note: The interested reader may read my Neptune article, Fighting Overfitting with L1 or L2 regularization, for a more in-depth version of this article.

What is Regularization?
