Visualizing Linear Regression

Rishabh Roy · Published in The Startup · Jul 29, 2020
Visual representation of the prediction line

Linear regression is a common machine learning technique that predicts a real-valued output using a weighted linear combination of one or more input values.

The “learning” part of linear regression is figuring out a set of weights w1, w2, …, wn and a bias b that lead to good predictions. This is done by looking at many examples, one by one or in batches, and adjusting the weights slightly each time to improve the predictions, using an optimisation technique called Gradient Descent.
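
In symbols, the model prediction and the gradient descent update look like this (η for the learning rate and L for the loss are standard notation, not taken from the original post):

```latex
\hat{y} = w_1 x_1 + w_2 x_2 + \cdots + w_n x_n + b,
\qquad
w_i \leftarrow w_i - \eta \frac{\partial L}{\partial w_i},
\qquad
b \leftarrow b - \eta \frac{\partial L}{\partial b}
```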

Let’s create some sample data with one feature “x” and one dependent variable “y”. We’ll assume that “y” is a linear function of “x”, with some noise added to account for features we haven’t considered here. Here’s how we generate the data points, or samples:
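
The code embed from the original post isn't reproduced in this extract, so the snippet below is a minimal sketch; the true slope of 2, intercept of 1, sample count of 100, and noise scale are all assumed values:

```python
import torch

torch.manual_seed(42)  # fixed seed so the samples are reproducible

# 100 samples of a single feature x, drawn uniformly from [0, 10)
x = torch.rand(100, 1) * 10

# y = 2x + 1 plus Gaussian noise (slope and intercept are assumed here);
# the noise stands in for the features we haven't considered
y = 2 * x + 1 + torch.randn(100, 1)
```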

And here’s what it looks like visually:
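
The original plot is an image and isn't reproduced here; a scatter plot of the samples can be drawn with matplotlib along these lines:

```python
import matplotlib.pyplot as plt

plt.scatter(x.numpy(), y.numpy(), alpha=0.6)
plt.xlabel("x")
plt.ylabel("y")
plt.title("Samples: y as a noisy linear function of x")
plt.show()
```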

Now we can define and instantiate a linear regression model in PyTorch:
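
The original model code isn't shown in this extract. A single nn.Linear layer with one input feature and one output is the usual way to express this model in PyTorch, so the sketch below assumes that formulation:

```python
import torch.nn as nn

# one input feature -> one output; the layer holds the weight w and bias b
model = nn.Linear(in_features=1, out_features=1)
```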

Loss function

Machines learn by means of a loss function, a method of evaluating how well a specific algorithm models the given data. Here we will use Mean Squared Error (MSE).
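
PyTorch provides mean squared error as nn.MSELoss; a minimal sketch:

```python
# MSE: the mean of (prediction - target)^2 over all samples
criterion = nn.MSELoss()
```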

We will use Stochastic Gradient Descent as our optimiser.
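
A sketch using torch.optim.SGD; the learning rate of 0.01 is an assumed value, not taken from the original post:

```python
import torch.optim as optim

# each step: parameter <- parameter - lr * gradient
optimiser = optim.SGD(model.parameters(), lr=0.01)
```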

Finally, we will train our model and visualise the regression line as it converges.
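
The training loop itself isn't reproduced in this extract, so the version below is a sketch built from the pieces above; the 310 epochs match the figure quoted next, and re-plotting every 50 epochs is an arbitrary choice for watching the line converge:

```python
for epoch in range(310):
    optimiser.zero_grad()          # clear gradients from the previous step
    y_pred = model(x)              # forward pass: y_pred = w*x + b
    loss = criterion(y_pred, y)    # mean squared error
    loss.backward()                # backpropagate to get gradients
    optimiser.step()               # gradient descent update

    if epoch % 50 == 0:            # visualise the fit as it improves
        plt.scatter(x.numpy(), y.numpy(), alpha=0.6)
        plt.plot(x.numpy(), y_pred.detach().numpy(), color="red")
        plt.title(f"Epoch {epoch}, loss = {loss.item():.3f}")
        plt.show()
```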

That’s it! It takes about 310 epochs for the model to come quite close to the best-fit line. The complete code for this post can be found in the accompanying Jupyter notebook.

Reference

Gradient Descent: https://en.wikipedia.org/wiki/Gradient_descent
