Ridge or L2 Regularization

Aarish Alam
Published in Analytics Vidhya
2 min read · Feb 8, 2021

To overcome overfitting and underfitting, regularization methods are used. L2, or Ridge, Regularization is one such technique.

What Do Overfitting and Underfitting Mean?

A model is said to be overfitted when it learns overly complex features, giving high accuracy on the training set but low accuracy on the test set. Conversely, a model that is too simple to capture the underlying pattern, and so performs poorly on both sets, is said to be underfitted.
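A minimal sketch of this gap, using toy data (not from the article): a degree-9 polynomial forced through 10 noisy points fits the training set almost perfectly but generalizes poorly.

```python
import numpy as np

# Toy data (assumed for illustration): the true relationship is y = x,
# observed with a little noise.
rng = np.random.default_rng(1)
x_train = np.linspace(0.0, 1.0, 10)
y_train = x_train + rng.normal(0.0, 0.1, 10)
x_test = np.linspace(0.05, 0.95, 50)
y_test = x_test  # noise-free targets for evaluation

# Degree 9 through 10 points interpolates the noise exactly: overfitting.
coeffs = np.polyfit(x_train, y_train, deg=9)
train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
print(train_mse, test_mse)  # training error is tiny; test error is larger
```

The near-zero training error paired with a much larger test error is the signature of an overfitted model.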

L2 regularization adds a penalty to such a model so that its variance is reduced.

The Formula for Ridge Regression

Cost = Σ(yᵢ − ŷᵢ)² + λ × (slope)²

Adding λ × (slope)² to the cost penalizes the model when the slope is too steep, discouraging overly complex fits.
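This penalty can be computed directly. Below is a small sketch for the simple-regression case the formula describes; the data points are assumptions for illustration, not from the article.

```python
import numpy as np

# Hypothetical toy data: a single feature x and target y.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([1.2, 1.9, 3.2, 3.9])

def ridge_cost(slope, intercept, lam):
    """Sum of squared residuals plus the L2 penalty lam * slope**2."""
    residuals = y - (slope * x + intercept)
    return np.sum(residuals ** 2) + lam * slope ** 2

# A larger lambda makes the same slope more expensive:
print(ridge_cost(1.0, 0.0, 0.0))  # no regularization
print(ridge_cost(1.0, 0.0, 9.0))  # same fit, penalized
```

With λ = 0 the cost is just the sum of squared residuals; with λ = 9 the same line costs 9 × slope² more, so minimizing this cost pulls the slope toward smaller values.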

Understanding With Example

As evident from the image, the underlying function is sin(x). The model is trained on the training samples and then used to predict on the test samples to study its behavior.

When λ is set to zero, no regularization is applied, and the overfitted model looks somewhat like this:

Overfitted Model

When λ is set to 9, let's see the effect:

Model with λ set to 9

Notice the change in the shape of the dome around x = 1. That happened because our cost function penalized the model for its steeper slope.

If you want to play with different parameters or check out the code for this regularization technique, follow the link below:

Github Repo Link
