Linear Regression using PyTorch

Learner1067 · Published in Analytics Vidhya · 4 min read · Jun 18, 2020

Mathematically, a linear equation in one variable can be defined as below.

y = mx+c

To relate this to a real-world example, let's assume a small company wants to predict the number of 25 L drinking-water bottles needed in a month based on that month's average temperature. They want to establish a pattern between the number of bottles and the month's average temperature (to keep it simple, let's assume the employee count is fixed). The company wants to use its historical data to build a model that can help achieve this.

An example of the data set:

Month's Avg Temperature | Number of Bottles Consumed in Month
35                      | 4
37                      | 8

Let's try to relate this to the equation.

Temperature is x and the number of bottles is y. Now we have to find m and c, so that when we put x into the equation y = mx + c, it helps us predict the number of bottles.
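To make this concrete, the two rows in the table above are enough to pin down one candidate line exactly (a toy illustration only; the PyTorch model below will instead fit m and c from all the data):

```python
# Fit y = m*x + c exactly through the two sample points from the table:
# (35 °C -> 4 bottles) and (37 °C -> 8 bottles).
x1, y1 = 35, 4
x2, y2 = 37, 8

m = (y2 - y1) / (x2 - x1)   # slope: extra bottles per degree
c = y1 - m * x1             # intercept

print(m, c)                 # 2.0 -66.0
print(m * 36 + c)           # prediction for a 36 °C month -> 6.0
```

With real, noisy data no single line passes through every point, which is why we minimize an error instead of solving exactly.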

In this article I will focus on the implementation using PyTorch; for the mathematical details, please refer to my previous posts: https://www.linkedin.com/feed/update/urn:li:activity:6648611139209003008/ or https://medium.com/@nirmal1067/linear-regression-from-scratch-in-java-dc7aead2ba04

Prerequisites: a working PyTorch environment.

Create the input tensor X and the actual output Y using the snippet below. Here E represents random error, added so the output is not perfectly linear.

import torch
import torch.nn as nn
import matplotlib.pyplot as plt

X = torch.linspace(0, 50, 50).reshape(-1, 1)
torch.manual_seed(50)
E = torch.randint(-8, 9, (50, 1), dtype=torch.float)
Y = 5*X + 8 + E
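Before training, it is worth sanity-checking the generated tensors; a small inspection sketch (the seed makes the values reproducible):

```python
import torch

X = torch.linspace(0, 50, 50).reshape(-1, 1)
torch.manual_seed(50)
E = torch.randint(-8, 9, (50, 1), dtype=torch.float)
Y = 5 * X + 8 + E

print(X.shape, Y.shape)   # both should be torch.Size([50, 1])
print(X[:3].flatten())    # first three inputs
print(Y[:3].flatten())    # corresponding noisy outputs
```

Both tensors must have shape (50, 1), since nn.Linear expects a 2-D input of (batch, features).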

Below is the code for the LinearRegressionModel class.

class LinearRegressionModel(nn.Module):

    def __init__(self, in_Feature=1, out_Feature=1, learningRate=0.001):
        super().__init__()
        self.linear = nn.Linear(in_Feature, out_Feature)
        # Use the learningRate argument rather than a hard-coded value
        self.optimizer = torch.optim.SGD(self.parameters(), lr=learningRate)
        self.criterion = nn.MSELoss()

    def predict(self, inputs):
        return self.linear(inputs)

    def printClassValues(self):
        for name, param in self.named_parameters():
            print(name, '\t', param.item())

    def trainModel(self, X, y, epochs=50):
        losses = []
        for i in range(epochs):
            i += 1
            y_pred = self.predict(X)
            loss = self.criterion(y_pred, y)
            # store the scalar value, not the graph-attached tensor
            losses.append(loss.item())
            print(f'epoch: {i:2} loss: {loss.item():10.8f} weight: {self.linear.weight.item():10.8f} '
                  f'bias: {self.linear.bias.item():10.8f}')
            self.optimizer.zero_grad()
            loss.backward()
            self.optimizer.step()

In order to test the code, use the snippet below. This will initialize your model with a default weight and bias.

lgModel = LinearRegressionModel(1,1)
lgModel.printClassValues()

The line below will train your model.

lgModel.trainModel(X,Y)
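If you just want to confirm that training actually reduces the loss, the same run can be condensed into a self-contained loop (a sketch using a bare nn.Linear, equivalent to the class above):

```python
import torch
import torch.nn as nn

# Same data as above
torch.manual_seed(50)
X = torch.linspace(0, 50, 50).reshape(-1, 1)
E = torch.randint(-8, 9, (50, 1), dtype=torch.float)
Y = 5 * X + 8 + E

model = nn.Linear(1, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
criterion = nn.MSELoss()

losses = []
for _ in range(50):
    y_pred = model(X)
    loss = criterion(y_pred, Y)
    losses.append(loss.item())
    optimizer.zero_grad()   # clear gradients from the previous step
    loss.backward()         # backpropagate
    optimizer.step()        # update weight and bias

print(losses[0], losses[-1])  # the loss should drop sharply
```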

While training, keep an eye on how the weight and bias change. Below is the training log for my model.

epoch:  1 loss: 24908.03906250 weight: -0.24355280 bias: 0.74757409
epoch: 2 loss: 11715.38183594 weight: 8.90680599 bias: 1.02333665
epoch: 3 loss: 5522.17285156 weight: 2.63715982 bias: 0.84102964
epoch: 4 loss: 2614.80932617 weight: 6.93266582 bias: 0.97256958
epoch: 5 loss: 1249.96154785 weight: 3.98936534 bias: 0.88907117
epoch: 6 loss: 609.23602295 weight: 6.00579691 bias: 0.95290476
epoch: 7 loss: 308.44332886 weight: 4.62402439 bias: 0.91578913
epoch: 8 loss: 167.23051453 weight: 5.57056141 bias: 0.94783634
epoch: 9 loss: 100.93089294 weight: 4.92183685 bias: 0.93249261
epoch: 10 loss: 69.79891968 weight: 5.36611938 bias: 0.94961578
epoch: 11 loss: 55.17618179 weight: 5.06151915 bias: 0.94449055
epoch: 12 loss: 48.30352783 weight: 5.27002287 bias: 0.95460564
epoch: 13 loss: 45.06904221 weight: 5.12696838 bias: 0.95427531
epoch: 14 loss: 43.54253006 weight: 5.22478771 bias: 0.96109837
epoch: 15 loss: 42.81778717 weight: 5.15756941 bias: 0.96301681
epoch: 16 loss: 42.46943665 weight: 5.20342922 bias: 0.96829230
epoch: 17 loss: 42.29781723 weight: 5.17181253 bias: 0.97126424
epoch: 18 loss: 42.20915604 weight: 5.19327927 bias: 0.97581106
epoch: 19 loss: 42.15945053 weight: 5.17837572 bias: 0.97927547
epoch: 20 loss: 42.12804413 weight: 5.18839121 bias: 0.98347813
epoch: 21 loss: 42.10523224 weight: 5.18133450 bias: 0.98717159
epoch: 22 loss: 42.08646393 weight: 5.18597412 bias: 0.99121052
epoch: 23 loss: 42.06961060 weight: 5.18260002 bias: 0.99500942
epoch: 24 loss: 42.05364609 weight: 5.18471670 bias: 0.99896938
epoch: 25 loss: 42.03814316 weight: 5.18307209 bias: 1.00281560
epoch: 26 loss: 42.02280426 weight: 5.18400383 bias: 1.00673640
epoch: 27 loss: 42.00761795 weight: 5.18317080 bias: 1.01060271
epoch: 28 loss: 41.99245834 weight: 5.18354702 bias: 1.01450300
epoch: 29 loss: 41.97735596 weight: 5.18309498 bias: 1.01837671
epoch: 30 loss: 41.96227264 weight: 5.18321037 bias: 1.02226520
epoch: 31 loss: 41.94721603 weight: 5.18293667 bias: 1.02614009
epoch: 32 loss: 41.93214798 weight: 5.18293047 bias: 1.03002095
epoch: 33 loss: 41.91712952 weight: 5.18274069 bias: 1.03389442
epoch: 34 loss: 41.90211487 weight: 5.18267632 bias: 1.03776956
epoch: 35 loss: 41.88711548 weight: 5.18252659 bias: 1.04164016
epoch: 36 loss: 41.87213898 weight: 5.18243551 bias: 1.04551053
epoch: 37 loss: 41.85715485 weight: 5.18230391 bias: 1.04937768
epoch: 38 loss: 41.84220123 weight: 5.18220091 bias: 1.05324376
epoch: 39 loss: 41.82727432 weight: 5.18207836 bias: 1.05710721
epoch: 40 loss: 41.81235123 weight: 5.18196821 bias: 1.06096911
epoch: 41 loss: 41.79744339 weight: 5.18185043 bias: 1.06482875
epoch: 42 loss: 41.78254318 weight: 5.18173838 bias: 1.06868660
epoch: 43 loss: 41.76766968 weight: 5.18162251 bias: 1.07254231
epoch: 44 loss: 41.75281525 weight: 5.18150854 bias: 1.07639611
epoch: 45 loss: 41.73797607 weight: 5.18139362 bias: 1.08024788
epoch: 46 loss: 41.72313309 weight: 5.18127966 bias: 1.08409774
epoch: 47 loss: 41.70832825 weight: 5.18116522 bias: 1.08794558
epoch: 48 loss: 41.69353104 weight: 5.18105078 bias: 1.09179139
epoch: 49 loss: 41.67874527 weight: 5.18093681 bias: 1.09563529
epoch: 50 loss: 41.66396332 weight: 5.18082237 bias: 1.09947717
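As a sanity check on those numbers, the same data can also be fit in closed form with ordinary least squares; the recovered slope should land close to the true 5, while the intercept estimate is noisier (a sketch using torch.linalg.lstsq, not part of the original article's method):

```python
import torch

# Regenerate the same data
torch.manual_seed(50)
X = torch.linspace(0, 50, 50).reshape(-1, 1)
E = torch.randint(-8, 9, (50, 1), dtype=torch.float)
Y = 5 * X + 8 + E

# Design matrix [x, 1] so the solution is [slope, intercept]
A = torch.cat([X, torch.ones_like(X)], dim=1)
solution = torch.linalg.lstsq(A, Y).solution
slope, intercept = solution[0].item(), solution[1].item()
print(slope, intercept)  # slope near the true 5, intercept near the true 8
```

This also explains the log above: after 50 epochs SGD has the slope near 5.18 but the bias still near 1.1; with a small learning rate the bias converges much more slowly than the weight.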

Once the model is trained, use the snippet below to generate the predicted output O and compare it against Y graphically using matplotlib.

O = X * lgModel.linear.weight + lgModel.linear.bias
plt.scatter(X.detach().numpy(), Y.numpy())
plt.plot(X.detach().numpy(), O.detach().numpy(), 'r')

Below is the graph for my model.
