Machine Learning 101: Linear Regression from Scratch
Implementing linear regression with sklearn is pretty damn easy. It's just two lines of code, but have you ever wondered how it really works?
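For reference, here is a minimal sketch of that two-liner (the toy data is just for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data just for illustration: y is roughly 3x + 4 plus noise.
X = np.random.rand(100, 1)
y = 3 * X.ravel() + 4 + 0.1 * np.random.randn(100)

# The "two lines": create the model and fit it.
model = LinearRegression()
model.fit(X, y)
print(model.intercept_, model.coef_)
```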
By that I mean: how do the two theta values change, and how does that show up in gradient descent and the loss function? We'll see exactly that in this story. I'd recommend catching up on linear regression first so that nothing bounces off!
Let's see how the hypothesis (our prediction function) can be written:
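A minimal sketch for the single-feature case, assuming a hypothesis of the form h(x) = theta0 + theta1 * x (the names here are illustrative):

```python
import numpy as np

def hypothesis(theta0, theta1, x):
    """Predict y for input x with the current parameters."""
    return theta0 + theta1 * x

# Quick check with made-up parameters and inputs.
x = np.array([1.0, 2.0, 3.0])
print(hypothesis(4.0, 3.0, x))  # -> [ 7. 10. 13.]
```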
What about the loss function? Here we have used Mean Squared Error:
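A minimal sketch of that cost function, assuming the same theta0/theta1 parameterisation as above:

```python
import numpy as np

def cost(theta0, theta1, x, y):
    """Mean Squared Error: J = (1/m) * sum((theta0 + theta1*x_i - y_i)^2)."""
    m = len(x)
    predictions = theta0 + theta1 * x
    return np.sum((predictions - y) ** 2) / m
```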
We ran a loop over all our data and calculated the loss for each pair of thetas we got from our main gradient descent function (don't worry, it's further down).
What about theta0 and theta1? How do they change? For that we need the gradient, which in this case means finding the derivatives of the cost function with respect to theta0 and theta1:
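Written out for m training points, the MSE cost and its two partial derivatives are:

$$J(\theta_0, \theta_1) = \frac{1}{m}\sum_{i=1}^{m}\left(\theta_0 + \theta_1 x_i - y_i\right)^2$$

$$\frac{\partial J}{\partial \theta_0} = \frac{2}{m}\sum_{i=1}^{m}\left(\theta_0 + \theta_1 x_i - y_i\right), \qquad \frac{\partial J}{\partial \theta_1} = \frac{2}{m}\sum_{i=1}^{m}\left(\theta_0 + \theta_1 x_i - y_i\right) x_i$$

The factor of 2 often gets folded into the learning rate, but I'll keep it explicit below.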
Let's check out the main function, which finds both thetas based on the learning rate and also stores the loss for us:
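A minimal sketch of such a function, assuming the gradients derived above and a fixed number of epochs (the names and defaults are illustrative):

```python
import numpy as np

def gradient_descent(x, y, learning_rate=0.1, epochs=1000):
    """Fit theta0 and theta1 by gradient descent, recording the loss each epoch."""
    m = len(x)
    theta0, theta1 = 0.0, 0.0
    losses = []
    for _ in range(epochs):
        error = (theta0 + theta1 * x) - y
        # Partial derivatives of the MSE cost (the 2/m factor from the formulas above).
        grad0 = (2 / m) * np.sum(error)
        grad1 = (2 / m) * np.sum(error * x)
        # Step both parameters against the gradient, scaled by the learning rate.
        theta0 -= learning_rate * grad0
        theta1 -= learning_rate * grad1
        losses.append(np.sum(error ** 2) / m)
    return theta0, theta1, losses

# Example usage with toy data (true relationship: y is roughly 3x + 4):
x = np.random.rand(100)
y = 3 * x + 4 + 0.1 * np.random.randn(100)
theta0, theta1, losses = gradient_descent(x, y)
print(theta0, theta1, losses[-1])
```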
Looks like our small program is working just fine. Watch that dip! We got down to 190!
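To reproduce a loss curve like that, a quick matplotlib sketch (using the losses list returned by the gradient descent sketch above):

```python
import matplotlib.pyplot as plt

# losses comes from the gradient_descent sketch above.
plt.plot(losses)
plt.xlabel("Epoch")
plt.ylabel("MSE loss")
plt.title("Loss over training")
plt.show()
```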
Accuracy!!
Let's plot the predictions against our original data and see how good our results are!
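A sketch of that plot, reusing x, y, theta0 and theta1 from the gradient descent sketch above:

```python
import matplotlib.pyplot as plt

# x, y, theta0 and theta1 come from the gradient_descent sketch above.
plt.scatter(x, y, label="Data")
plt.plot(x, theta0 + theta1 * x, color="red", label="Fitted line")
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.show()
```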
Do check out my next post on Multivariate regression!