Regularized Linear Regression

Prashant Bhardwaj · Published in AlmaBetter · Apr 8, 2021 · 6 min read

Linear models (LMs) provide a simple, yet effective, approach to predictive modeling. Moreover, when certain assumptions required by LMs are met (e.g., constant variance), the estimated coefficients are unbiased and, of all linear unbiased estimates, have the lowest variance. However, the data sets analyzed today typically contain a large number of features. As the number of features grows, certain assumptions typically break down and these models tend to overfit the training data, causing our out-of-sample error to increase. Regularization methods provide a means to constrain or regularize the estimated coefficients, which can reduce the variance and decrease out-of-sample error.

Why regularize?

The easiest way to understand regularized regression is to explain how and why it is applied to ordinary least squares (OLS). The objective in OLS regression is to find the hyperplane (e.g., a straight line in two dimensions) that minimizes the sum of squared errors (SSE) between the observed and predicted response values. Graphically, this means identifying the hyperplane that minimizes the vertical distances between the observed data points and the fitted line.

More formally, the objective function being minimized can be written as:

\text{minimize} \left\{ \text{SSE} = \sum_{i=1}^{n} \bigl( y_i - \hat{y}_i \bigr)^2 \right\}, \qquad \hat{y}_i = \beta_0 + \sum_{j=1}^{p} \beta_j x_{ij}

where:

  • n is the total number of observations;
  • yi is the observed response value for the ith observation;
  • ŷi is the predicted response value for the ith observation;
  • p is the total number of features;
  • β0 is the intercept and βj is the coefficient of the jth feature;
  • xij is the value of the jth feature for the ith observation.
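
As a quick numeric illustration of this objective, here is a minimal sketch (the data and coefficient values are made up) that computes the SSE for a candidate pair of coefficients on a toy one-feature data set:

```python
import numpy as np

# Toy data: one feature, five observations (made-up values for illustration)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.3, 5.9, 8.2, 9.8])

# Candidate coefficients: intercept beta_0 and slope beta_1
beta_0, beta_1 = 0.1, 2.0

y_hat = beta_0 + beta_1 * x      # predicted responses
sse = np.sum((y - y_hat) ** 2)   # sum of squared errors
print(f"SSE for (beta_0={beta_0}, beta_1={beta_1}): {sse:.3f}")
```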

The OLS objective function performs quite well when our data adhere to a few key assumptions:

  • Linear relationship;
  • There are more observations (n) than features (p), i.e., n > p;
  • No or little multicollinearity.

Many real-life data sets, like those common to text mining and genomic studies, are wide, meaning they contain more features than observations (p > n). As p increases, we're more likely to violate some of the OLS assumptions, and alternative approaches should be considered. The presence of multicollinearity, for example, diminishes the interpretability of our estimated coefficients due to inflated variance; by reducing multicollinearity we can improve a model's accuracy. Of course, multicollinearity can also occur when n > p.

Having a large number of features invites additional issues when using classic regression models. For one, it makes the model much less interpretable. Additionally, when p > n, there are many (in fact, infinitely many) solutions to the OLS problem! In such cases, it is useful (and practical) to assume that a smaller subset of the features exhibits the strongest effects (something called the bet on sparsity principle; see Hastie, Tibshirani, and Wainwright 2015, 2). For this reason, we sometimes prefer estimation techniques that incorporate feature selection. One approach to this is called hard-thresholding feature selection, which includes many of the traditional linear model selection approaches like forward selection and backward elimination. These procedures, however, can be computationally inefficient, do not scale well, and treat a feature as either in or out of the model (hence the name hard thresholding). In contrast, a more modern approach, called soft thresholding, slowly pushes the effects of irrelevant features toward zero and, in some cases, will zero out entire coefficients. As will be demonstrated, this can result in more accurate models that are also easier to interpret.
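
To make the soft-thresholding idea concrete, here is a minimal sketch using scikit-learn's Lasso on synthetic data in which only a handful of features carry any signal; the data, the choice of alpha, and the "true" coefficients are all invented for illustration:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(42)

# Synthetic "bet on sparsity" data: 100 features, only 5 carry signal
n, p = 200, 100
X = rng.normal(size=(n, p))
true_beta = np.zeros(p)
true_beta[:5] = [3.0, -2.0, 1.5, -1.0, 0.5]
y = X @ true_beta + rng.normal(scale=0.5, size=n)

# The L1 penalty softly shrinks irrelevant coefficients, many all the way to zero
lasso = Lasso(alpha=0.1).fit(X, y)
print("non-zero coefficients:", np.sum(lasso.coef_ != 0), "of", p)
```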

With wide data (or data that exhibits multicollinearity), one alternative to OLS regression is to use regularized regression (also commonly referred to as penalized models or shrinkage methods, as in J. Friedman, Hastie, and Tibshirani (2001) and Kuhn and Johnson (2013)) to constrain the total size of all the coefficient estimates. This constraint helps to reduce the magnitude and fluctuations of the coefficients, which in turn reduces the variance of our model (at the expense of no longer being unbiased, a reasonable compromise).
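
For reference (and to anchor the tuning parameter λ discussed later), the two most common regularized linear models, ridge and lasso, simply add a penalty term to the SSE objective:

\text{minimize} \left\{ \text{SSE}(\beta) + \lambda \sum_{j=1}^{p} \beta_j^2 \right\} \;\; \text{(ridge)}, \qquad \text{minimize} \left\{ \text{SSE}(\beta) + \lambda \sum_{j=1}^{p} \lvert \beta_j \rvert \right\} \;\; \text{(lasso)}

The tuning parameter λ ≥ 0 controls how strongly the coefficients are shrunk toward zero.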

As will be seen in future optimization applications, the SSE is well suited to serve as a loss function, i.e., a function that is minimized and that quantifies the model's error for a given technique. Many models other than regularized linear models use the SSE term in their respective loss functions.

Ordinary Least Squares

Now that linear modeling and error have been covered, we can move on to the simplest linear regression model, Ordinary Least Squares (OLS). In this case, the SSE term itself is the model's loss function and can be expressed as:

\text{SSE}(\beta) = \sum_{i=1}^{n} \bigl( y_i - \hat{y}_i \bigr)^2 = \lVert y - X\beta \rVert_2^2

where X is the n × (p + 1) design matrix (including a leading column of ones for the intercept) and y is the vector of observed responses.

Using this loss function, the problem can now be formalized as a least-squares optimization problem, which serves to derive estimates of the model parameters β that minimize the SSE between the actual and predicted values of the outcome:

\hat{\beta} = \arg\min_{\beta} \; \frac{1}{2n} \sum_{i=1}^{n} \bigl( y_i - \hat{y}_i \bigr)^2 = \arg\min_{\beta} \; \frac{1}{2n} \lVert y - X\beta \rVert_2^2

The 1/(2n) term is added in order to simplify solving the gradient and allow the objective function to converge to the expected value of the model error by the Law of Large Numbers.
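
As a small sketch of this scaled objective and its gradient (the function names here are my own, not from any library), note how the factor of 2 cancels when differentiating:

```python
import numpy as np

def ols_loss(beta, X, y):
    """1/(2n)-scaled sum of squared errors."""
    n = len(y)
    residuals = y - X @ beta
    return (residuals @ residuals) / (2 * n)

def ols_gradient(beta, X, y):
    """Gradient of the scaled loss; the factor of 2 has cancelled with the 1/2."""
    n = len(y)
    return X.T @ (X @ beta - y) / n
```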

Aided by the problem's unconstrained nature, a closed-form solution for the OLS estimator can be obtained by setting the gradient of the loss function (objective) equal to zero and solving the resulting equation for the coefficient vector β̂. Assuming XᵀX is invertible, this produces the following estimator:

\hat{\beta} = \bigl( X^\top X \bigr)^{-1} X^\top y
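
As a sanity check, here is a short sketch that computes this closed-form estimator on synthetic data and compares it with scikit-learn's LinearRegression (the data and the use of np.linalg.solve are my own choices for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n, p = 100, 3
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=n)

# Append a column of ones so beta includes the intercept
X1 = np.column_stack([np.ones(n), X])

# Closed-form OLS: solve (X^T X) beta = X^T y rather than inverting explicitly
beta_hat = np.linalg.solve(X1.T @ X1, X1.T @ y)

sk = LinearRegression().fit(X, y)
print("closed form: ", np.round(beta_hat, 4))
print("scikit-learn:", np.round(np.r_[sk.intercept_, sk.coef_], 4))
```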

However, it is not immediately clear that this estimator is the unique minimizer, so this should be shown. To do so, it will suffice to show that the loss function is convex, since any local minimum of a convex function is also a global minimum; and when the function is strictly convex (here, when XᵀX is positive definite), that minimizer is unique.

One possible way to show this is through the second-order convexity conditions, which state that a function is convex if it is continuous, twice differentiable, and has an associated Hessian matrix that is positive semi-definite. Due to its quadratic nature, the OLS loss function is both continuous and twice differentiable, satisfying the first two conditions.

To establish the last condition, the Hessian of the OLS loss function is found to be:

\nabla^2_{\beta} \, \frac{1}{2n} \lVert y - X\beta \rVert_2^2 = \frac{1}{n} X^\top X

Furthermore, this Hessian can be shown to be positive semi-definite, since for any vector v:

v^\top \Bigl( \frac{1}{n} X^\top X \Bigr) v = \frac{1}{n} \lVert X v \rVert_2^2 \ge 0

Thus, by the second-order conditions for convexity, the OLS loss function is convex, so the estimator found above is a global minimizer of the OLS problem; and whenever XᵀX is invertible (positive definite), it is the unique one.
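
The positive semi-definiteness of the Hessian can also be sanity-checked numerically by inspecting its eigenvalues, which should all be non-negative; a quick sketch on random data:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))

hessian = X.T @ X / len(X)             # Hessian of the 1/(2n)-scaled loss
eigvals = np.linalg.eigvalsh(hessian)  # symmetric matrix, so eigvalsh is appropriate
print("smallest eigenvalue:", eigvals.min())  # >= 0 (up to floating-point error)
```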

What does Regularization achieve?

A standard least squares model tends to have some variance, i.e., it may not generalize well to a data set other than its training data. Regularization significantly reduces the variance of the model without a substantial increase in its bias. The tuning parameter λ used in the regularization techniques described above controls this trade-off between bias and variance. As the value of λ rises, the magnitudes of the coefficients are reduced, and so is the variance. Up to a point, this increase in λ is beneficial, since it only reduces the variance (hence avoiding overfitting) without losing any important properties in the data. But beyond a certain value, the model starts losing important properties, giving rise to bias and thus underfitting. Therefore, the value of λ should be chosen carefully.
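
The shrinking effect of λ can be seen directly by refitting a model over a grid of values and watching the coefficient magnitudes fall; a minimal sketch with scikit-learn's Ridge (the synthetic data and the grid of alphas are arbitrary illustrative choices):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n, p = 100, 20
X = StandardScaler().fit_transform(rng.normal(size=(n, p)))
y = X[:, :3] @ np.array([4.0, -3.0, 2.0]) + rng.normal(size=n)

# As lambda (called alpha in scikit-learn) increases, coefficients shrink toward zero
for alpha in [0.01, 1, 100, 10000]:
    ridge = Ridge(alpha=alpha).fit(X, y)
    print(f"alpha={alpha:>7}: ||coef|| = {np.linalg.norm(ridge.coef_):.3f}")
```

In practice, λ is usually chosen by cross-validation, for example with scikit-learn's RidgeCV or LassoCV.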

This is all the background you need to get started with regularization. It is a useful technique that can help improve the accuracy of your regression models. A popular library for implementing these algorithms is Scikit-Learn; it has a wonderful API that can get your model up and running with just a few lines of Python code.

If you liked this article, be sure to show your support by clapping for this article below and if you have any questions, leave a comment and I will do my best to answer.


My name is Prashant Bhardwaj, and I am currently pursuing a postgraduate degree in Mathematics and Scientific Computing at NIT Warangal.