Simple Linear Regression: An Introduction to Regression From Scratch

Imdadul Haque Milon · Published in Gadictos · Jul 13, 2019

Regression is a fundamental concept in statistics, machine learning, and neural networks. Imagine plotting, back in high school, the correlation between rainfall frequency and agricultural production: an increase in rainfall generally increases production. Fitting a line to those points lets us predict the production rate under different rain conditions. That was, in fact, the simplest form of linear regression.

In simple words, regression is the study of how to best fit a curve that summarizes a collection of data. It is one of the most powerful and well-studied types of supervised learning algorithms. In regression, we try to understand the data points by discovering the curve that might have generated them. In doing so, we seek an explanation for why the given data is scattered the way it is. The best-fit curve gives us a model for explaining how the dataset might have been produced. There are many types of regression, e.g. simple linear regression, polynomial regression, and multivariate regression. In this post, we will discuss simple linear regression only and cover the rest later. We will also provide the Python code from scratch at the end of the post.

Simple regression, as the name implies, is the simplest form of regression: we work with only one input variable and try to fit a straight line.

Data Set

Consider a data set containing age and the number of homicide deaths in the US in the year 2015:

A chunk of the whole dataset is shown above, and if we plot the data along with the line that best fits it, we see:

When we are talking about regression, our goal is to predict a continuous output variable given some input variables. For simple regression, we have only one input variable x, which in our case is age, and a desired output y, which is the number of homicide deaths at each age. Our dataset then consists of many examples of x and y:
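(xᵢ, yᵢ),  i = 1, 2, …, N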

where N is the number of examples in the data set. For our homicide death data set, each pair (xᵢ, yᵢ) is an (age, number of homicide deaths) observation.
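As a minimal sketch, such a dataset could be loaded into NumPy arrays like this (the file name and column names below are assumptions for illustration only):

```python
import numpy as np
import pandas as pd

# Hypothetical CSV with one row per age: columns "age" and "num_homicide_deaths"
data = pd.read_csv("homicide_deaths_2015.csv")

x = data["age"].to_numpy(dtype=float)                  # input: age
y = data["num_homicide_deaths"].to_numpy(dtype=float)  # output: number of homicide deaths

N = len(x)  # number of examples in the data set
```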

Model

Now, how can we mathematically model simple linear regression? Since the goal is to find the perfect line, let's start by defining the model (the mathematical description of how predictions will be created) as a line. We assume we have just one input, which in this case is the age of a person, and one output, which is the number of homicide deaths. Our goal is to fit a line, and we know the equation of a line is:
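y = w₀ + w₁x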

Fitted linear regression line of the Data Set.

What this regression model specifies is that each one of our observations yᵢ is simply that function evaluated at xᵢ, that is, w₀ plus w₁xᵢ, plus an error term which we call εᵢ. So this is our regression model:
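yᵢ = w₀ + w₁xᵢ + εᵢ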

To be clear, this error εᵢ is the distance from a specific observation back down to the line. The parameters of this model, w₀ and w₁, are the intercept and slope, and we call them the regression coefficients.

Quality Metric

We have chosen a model with two regression coefficients, w₀ and w₁. For our data set, there are infinitely many choices of these parameters, so our task is to find the best choice, and for that we need a way to measure the quality of the parameters, i.e. the quality of the fit. In particular, we define a loss function (also called a cost function) that measures how good or bad a particular choice of w₀ and w₁ is. Values of w₀ and w₁ that fit poorly should result in a large value of the loss function, whereas good values of w₀ and w₁ should result in small values. So what is the cost of using a specific line? The loss function can take many forms, but the one we are going to use here is the Residual Sum of Squares (RSS):
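RSS(w₀, w₁) = ∑ᵢ₌₁ᴺ [yᵢ − (w₀ + w₁xᵢ)]²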

The residual sum of squares simply adds up the squared errors between what we estimate the relationship between x and y to be and what the actual observation y is. These errors are the εᵢ we talked about above.
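Here is a minimal sketch of the RSS in code, assuming x and y are NumPy arrays as above:

```python
import numpy as np

def rss(w0, w1, x, y):
    """Residual sum of squares for the line y = w0 + w1 * x."""
    predictions = w0 + w1 * x      # predicted y for every x
    residuals = y - predictions    # epsilon_i: distance from each observation to the line
    return np.sum(residuals ** 2)  # sum of squared errors
```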

Model Optimization

Our cost function is the residual sum of squares, and for any given line we can compute its cost. So, for example, if we have two different lines with two different residual sums of squares, how do we know which choice of parameters is better? The one with the minimum RSS.

Two different fitted lines of the data set for comparison: one from linear regression and the other from polynomial regression.

Our goal is to minimize over all possible values of w₀ and w₁, the intercept and slope respectively. But how are we going to do this? The mathematical notation for this minimization over all possible w₀, w₁ is:
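min over w₀, w₁:  RSS(w₀, w₁) = ∑ᵢ₌₁ᴺ [yᵢ − (w₀ + w₁xᵢ)]²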

So we want to find the specific values of w₀ and w₁, which we will call ŵ₀ and ŵ₁ respectively, that minimize this residual sum of squares.

The red dot on the plot above marks the desired minimum. We need an algorithm to find this minimum, and we will discuss two approaches: a closed-form solution and gradient descent.

Closed-form Solution for Linear Regression

From calculus, we know that at the minimum the derivatives will be 0. So we compute the gradient of our RSS:
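∂RSS/∂w₀ = −2 ∑ᵢ₌₁ᴺ [yᵢ − (w₀ + w₁xᵢ)]

∂RSS/∂w₁ = −2 ∑ᵢ₌₁ᴺ [yᵢ − (w₀ + w₁xᵢ)] xᵢ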

We take this gradient, set it equal to zero, and solve for the estimates of w₀ and w₁. These are the estimates of the two parameters of our model that define the fitted line.

implies,
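−2 ∑ᵢ₌₁ᴺ [yᵢ − (ŵ₀ + ŵ₁xᵢ)] = 0

−2 ∑ᵢ₌₁ᴺ [yᵢ − (ŵ₀ + ŵ₁xᵢ)] xᵢ = 0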

Solving for w₀ and w₁ we get,
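ŵ₁ = (∑yᵢxᵢ − (∑yᵢ)(∑xᵢ)/N) / (∑xᵢ² − (∑xᵢ)²/N)

ŵ₀ = (∑yᵢ)/N − ŵ₁ (∑xᵢ)/N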

Now that we have the solutions, we just have to compute ŵ₁ and then plug it in to compute ŵ₀. To compute ŵ₁ we need a few terms: the sum over all of our observations ∑yᵢ, the sum over all of our inputs ∑xᵢ, and a couple of terms involving products of our inputs and outputs, ∑yᵢxᵢ and ∑xᵢ². Plugging them into these equations gives us the optimal ŵ₀ and ŵ₁ that minimize our residual sum of squares.
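Here is a minimal sketch of this closed-form computation, again assuming x and y are NumPy arrays as before:

```python
import numpy as np

def fit_closed_form(x, y):
    """Closed-form least-squares estimates for simple linear regression."""
    N = len(x)
    sum_x = np.sum(x)
    sum_y = np.sum(y)
    sum_xy = np.sum(x * y)
    sum_x2 = np.sum(x ** 2)

    # Slope and intercept that minimize the residual sum of squares
    w1 = (sum_xy - sum_x * sum_y / N) / (sum_x2 - sum_x ** 2 / N)
    w0 = sum_y / N - w1 * sum_x / N
    return w0, w1
```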

Gradient Descent for Linear Regression

The other approach we will discuss is gradient descent, where we walk down the surface of the residual sum of squares trying to reach the minimum. Of course, we might overshoot it and go back and forth, but the general idea is that we follow this iterative procedure.

Then our gradient descent algorithm will be:
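repeat until convergence:
    W ← W − η ∇RSS(W)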

So gradient descent does exactly this: we repeatedly update our weights, setting W to W minus η times the gradient, where W is the vector (w₀, w₁). We repeat this until the algorithm converges. Here η is the learning rate, which controls how big a step we take on each iteration of gradient descent, and the gradient term is the update, i.e. the change we want to make to the parameters W.
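Here is a minimal sketch of gradient descent for this model; the learning rate, iteration count, and tolerance below are arbitrary illustrative choices and may need tuning for a particular dataset:

```python
import numpy as np

def fit_gradient_descent(x, y, eta=1e-5, num_iters=100000, tol=1e-8):
    """Gradient descent on the residual sum of squares for y = w0 + w1 * x."""
    w0, w1 = 0.0, 0.0  # start from an arbitrary initial guess
    for _ in range(num_iters):
        residuals = y - (w0 + w1 * x)

        # Partial derivatives of RSS with respect to w0 and w1
        grad_w0 = -2.0 * np.sum(residuals)
        grad_w1 = -2.0 * np.sum(residuals * x)

        # Step against the gradient, scaled by the learning rate eta
        w0 -= eta * grad_w0
        w1 -= eta * grad_w1

        # Stop once the gradient is (nearly) zero
        if max(abs(grad_w0), abs(grad_w1)) < tol:
            break
    return w0, w1
```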

Prediction for Linear Regression

After all the hard work, we now need to test our machine learning model. The dataset we work on is generally split into two parts: one part, called the training data, is where we do all the training, and the other, called the test data, is where we test our model. We have developed equations for training, and using them we have obtained a calibrated set of weights. We then use this set of weights to predict the result for new data using the equation
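ŷ = ŵ₀ + ŵ₁x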

where ŵ₀ and ŵ₁ are the optimized weights.
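Putting the pieces together, a minimal usage sketch, reusing the x and y arrays and the fitting functions sketched above, might look like this:

```python
# Fit the model with either approach
w0_hat, w1_hat = fit_closed_form(x, y)
# w0_hat, w1_hat = fit_gradient_descent(x, y)

def predict(x_new, w0_hat, w1_hat):
    """Predict the number of homicide deaths for a new age (or array of ages)."""
    return w0_hat + w1_hat * x_new

print(predict(40, w0_hat, w1_hat))  # predicted number of deaths at age 40
```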

Now that we have finished the theoretical part of the tutorial, you can go through the code and try to understand its different blocks.

Originally published at Gadictos.
