Ridge Regression

Nabin Adhikari
3 min read · Dec 30, 2022



Ridge regression is a type of regularized linear regression. In regularized linear regression, a penalty term is added to the objective function that the model is trying to minimize. The purpose of this penalty term is to prevent overfitting, which occurs when a model fits the training data too well and does not generalize well to new data.

Ridge regression is a specific type of regularized linear regression that uses L2 regularization, which adds a penalty term to the objective function which is the sum of the squared coefficients of the model. The objective function for ridge regression is:

Loss = (1/n) * ∑(yi − ŷi)² + λ * ∑bi²

where n is the number of observations, yi is the true value for the i-th observation, ŷi is the predicted value for the i-th observation, bi is the coefficient for the i-th feature, and λ is the regularization parameter. The regularization parameter determines the strength of the penalty term and can be adjusted to control the model’s complexity.
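As a sanity check, this loss can be computed directly in NumPy. The data, coefficients, and λ value below are made up purely for illustration:

```python
import numpy as np

# Hypothetical toy data: 5 observations, 2 features
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0], [5.0, 5.0]])
y = np.array([3.0, 4.0, 8.0, 9.0, 12.0])
b = np.array([1.0, 1.5])   # coefficients bi (chosen arbitrarily here)
lam = 0.1                  # regularization parameter λ

y_hat = X @ b                            # predictions ŷi
mse = np.mean((y - y_hat) ** 2)          # (1/n) * ∑(yi − ŷi)²
penalty = lam * np.sum(b ** 2)           # λ * ∑bi²
loss = mse + penalty
```

Increasing `lam` makes the penalty term dominate, which pushes the optimal coefficients toward smaller values.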

The objective function for ridge regression is similar to the objective function for linear regression, with the addition of the penalty term. The penalty term helps to constrain the coefficients of the model, which can help to prevent overfitting. Ridge regression is particularly useful when there are many features in the dataset, as it can help to reduce the variance of the model and improve the generalization of the model to new data.

Ridge regression is typically used in situations where the goal is to build a predictive model that generalizes well to new data. It can be used in a variety of applications, including predicting stock prices, modeling the relationship between temperature and atmospheric pressure, and predicting the outcomes of sporting events.
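In practice, a model like this is rarely fit by hand. A minimal sketch using scikit-learn's `Ridge` estimator on synthetic data (the `alpha` parameter plays the role of λ):

```python
import numpy as np
from sklearn.linear_model import Ridge

# Synthetic data for illustration: 100 observations, 10 features
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = X @ rng.normal(size=10) + rng.normal(scale=0.5, size=100)

model = Ridge(alpha=1.0)   # alpha is the regularization strength (λ)
model.fit(X, y)
print(model.coef_)         # the shrunken coefficients
```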

In simple linear regression, we fit the parameters (m, b) of the line y = mx + b. Coefficients that grow very large are often a sign of overfitting, while coefficients shrunk too close to zero lead to underfitting. Regularization controls this trade-off by penalizing large coefficients.

Regularization Types

- Ridge (L2 norm)

- Lasso (L1 norm)

- Elastic Net

Formula

λ (lambda) is a hyperparameter whose value ranges from 0 to infinity.

With Lasso Regression, coefficients can shrink to exactly 0, but with Ridge they can only get close to 0, never exactly 0.
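This difference can be observed directly. On a synthetic dataset where only a few features matter, Lasso zeros out the irrelevant coefficients while Ridge merely shrinks them (the data and `alpha` values below are illustrative choices, not a recommendation):

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 20))
# Only the first 3 of 20 features actually influence y
y = 3 * X[:, 0] + 2 * X[:, 1] + X[:, 2] + rng.normal(scale=0.1, size=200)

lasso = Lasso(alpha=0.1).fit(X, y)
ridge = Ridge(alpha=10.0).fit(X, y)

n_zero_lasso = int(np.sum(lasso.coef_ == 0))  # many coefficients exactly 0
n_zero_ridge = int(np.sum(ridge.coef_ == 0))  # small but nonzero coefficients
print(n_zero_lasso, n_zero_ridge)
```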


Mathematical Formulation of Ridge Regression

Ridge Regression(L2 Regularization Method)

Regularization is a technique that helps overcome the overfitting problem in machine learning models. It is called regularization because it keeps the parameters regular, i.e., small. The common techniques are L1 and L2 regularization, better known as Lasso and Ridge Regression.

When fitting a line y = ax + b with Ridge Regression, the quantity being minimized becomes: the sum of squared residuals + λ · (slope)².

This extra term is known as the penalty, and λ determines how severe the penalty will be.
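To sketch how the penalty shrinks coefficients in practice, the ridge solution can be written in closed form as b = (XᵀX + λI)⁻¹ Xᵀy (intercept omitted for simplicity; the data below is synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=50)

def ridge_coef(X, y, lam):
    """Closed-form ridge coefficients: (XᵀX + λI)⁻¹ Xᵀy."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

b_small = ridge_coef(X, y, 0.01)   # weak penalty: close to least squares
b_large = ridge_coef(X, y, 100.0)  # strong penalty: coefficients pulled toward 0
```

The norm of the coefficient vector decreases as λ grows, which is exactly the shrinkage effect the penalty term is designed to produce.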
