Ridge or L2 Regularization
Regularization methods are used to combat overfitting and underfitting. L2, or Ridge, regularization is one such technique.
What Do Overfitting and Underfitting Mean?
When a model learns overly complex features, it gives high accuracy on the training set but low accuracy on the test set; such a model is said to be overfitted. Conversely, a model too simple to capture the underlying pattern performs poorly on both sets and is said to be underfitted.
L2 regularization is a method of adding a penalty to the cost function of such a system so that variance is reduced.
Adding λ × (slope)² to the cost ensures a penalty when the slope is too steep, hence discouraging overly complex features.
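The penalized cost can be sketched in a few lines; this is a minimal illustration, and the names (`ridge_cost`, `lam` for λ) are assumptions, not taken from the article's code:

```python
import numpy as np

def ridge_cost(X, y, w, lam):
    """Mean squared error plus the L2 penalty lam * sum(w^2)."""
    residuals = X @ w - y
    return np.mean(residuals ** 2) + lam * np.sum(w ** 2)
```

With `lam = 0` this is the ordinary least-squares cost; any `lam > 0` makes large coefficients (steep slopes) more expensive.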
Understanding With an Example
As evident from the image, the underlying function is sin(x); the model is trained on the training samples and then used to predict the test samples to study its behavior.
When λ is set to zero, which means no regularization is applied, the overfitted model looks somewhat like this:
When λ is set to 9, let's see the effect:
Notice the change in the shape of the dome around 1. That happened because our cost function penalized the model for its steep slope.
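The experiment above can be reproduced with a closed-form ridge fit on a polynomial model. This is a sketch under assumed settings (sample count, noise level, and polynomial degree are illustrative, not the article's exact configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy training samples of sin(x), as in the figure.
x_train = rng.uniform(0.0, 2.0 * np.pi, 12)
y_train = np.sin(x_train) + rng.normal(0.0, 0.2, 12)

DEGREE = 9  # high enough for the lam=0 model to overfit 12 points

def poly_features(x, degree=DEGREE):
    # Rescale x from [0, 2*pi] to [-1, 1] so the Vandermonde
    # matrix stays reasonably well-conditioned.
    z = (np.asarray(x) - np.pi) / np.pi
    return np.vander(z, degree + 1, increasing=True)

def fit_ridge(x, y, lam):
    """Closed-form ridge solution: w = (X^T X + lam*I)^-1 X^T y."""
    X = poly_features(x)
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def mse(w, x, y):
    return np.mean((poly_features(x) @ w - y) ** 2)

w_overfit = fit_ridge(x_train, y_train, lam=0.0)  # no regularization
w_ridge = fit_ridge(x_train, y_train, lam=9.0)    # heavy penalty

# The penalty shrinks the coefficients, flattening steep slopes,
# at the price of a slightly worse fit on the training set.
print("coefficient norm, lam=0:", np.sum(w_overfit ** 2))
print("coefficient norm, lam=9:", np.sum(w_ridge ** 2))
print("train MSE, lam=0:", mse(w_overfit, x_train, y_train))
print("train MSE, lam=9:", mse(w_ridge, x_train, y_train))
```

The λ = 9 fit has a much smaller coefficient norm, which is exactly the flattening of the dome seen in the plot: the cost function trades a little training accuracy for a smoother curve.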
If you want to play with different parameters or check out the code for this regularization technique, follow the link below.