Regularisation: do I really need you?

azar_e
The Making Of… a Data Scientist
7 min read · Sep 11, 2018


Overfitting… you have probably heard this term somewhere before, right? It happens when your model tries so hard to fit your training data that it ends up capturing all the noise (points that do not represent your true data, the outliers).

If you have already heard about overfitting, you have probably also heard of the trade-off between bias and variance (check this post for more details).

Regularisation: why do I use you?

Ordinary Least Squares

Before we go further, let me just make it clear that Ordinary Least Squares (OLS) is not a regularisation method; it is a linear least squares method for estimating the unknown coefficients of a Linear Regression model. The problem with this method is that, when there is more than one coefficient, there may be high correlation between the corresponding predictors, which in turn gives the model very high variance and makes it overfit our training data. Check this post for more details on the downsides of high variance.
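To make this concrete, here is a minimal sketch (my own illustration, not from the original post, assuming NumPy and scikit-learn) of what "high variance" means here: when two predictors are nearly identical, refitting OLS on fresh samples of the same process gives wildly different coefficients each time.

```python
# A minimal sketch: strongly correlated predictors inflate the variance
# of OLS coefficient estimates across refits on fresh data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
coefs = []
for _ in range(200):  # refit on 200 fresh samples of the same process
    x1 = rng.normal(size=100)
    x2 = x1 + rng.normal(scale=0.01, size=100)  # x2 ~ x1: near-perfect correlation
    X = np.column_stack([x1, x2])
    y = x1 + x2 + rng.normal(scale=0.5, size=100)  # true coefficients: (1, 1)
    coefs.append(LinearRegression().fit(X, y).coef_)

coefs = np.array(coefs)
# The spread is huge compared to the true values of 1 and 1.
print("coefficient std across refits:", coefs.std(axis=0))
```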

Hence, we wish to control our parameter values; we do not want them to grow unbounded. This is the issue with OLS when the predictors are correlated: the coefficients can become abnormally big. A good solution is to put a limit on the growth of these coefficients, i.e. to regularise our model.
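As a quick sketch of that idea (again my own illustration, using scikit-learn's Ridge as one common choice of regulariser, not something prescribed by this post), compare the coefficients OLS and a penalised model produce on the same correlated data:

```python
# A minimal sketch contrasting unregularised OLS with Ridge regression,
# whose L2 penalty caps how large correlated coefficients can grow.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(1)
x1 = rng.normal(size=100)
x2 = x1 + rng.normal(scale=0.01, size=100)  # highly correlated predictors
X = np.column_stack([x1, x2])
y = x1 + x2 + rng.normal(scale=0.5, size=100)

print("OLS coefficients:  ", LinearRegression().fit(X, y).coef_)
print("Ridge coefficients:", Ridge(alpha=1.0).fit(X, y).coef_)  # shrunk, more stable
```

The Ridge coefficients stay close to each other and to the true values, because the penalty makes it costly for one coefficient to blow up while the other compensates.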
