
Linear Regression: Hypothesis Function, Cost Function, and Gradient Descent (Part 2)

The Maths and Theory Behind the Most Famous Supervised Learning Technique

Mahyar Ali · Published in Analytics Vidhya · 8 min read · Feb 16, 2020


[Note: Before reading this article, I encourage you to read the first part of this series (Link) to gain a better understanding of the hypothesis function and the cost function.]

Recap:-

In the previous article, we made a thorough study of the theory behind the cost function. In this article, we will minimize the cost and find the parameter values that best fit the data.
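As a quick refresher, the cost function from Part 1 can be sketched in Python. This is a minimal sketch assuming the standard mean-squared-error formulation for simple linear regression; the function and variable names are illustrative, not from the original article:

```python
import numpy as np

def compute_cost(theta0, theta1, x, y):
    """Mean squared error cost J(theta0, theta1) for simple linear regression.

    Hypothesis: h(x) = theta0 + theta1 * x
    Cost:       J = (1 / (2m)) * sum((h(x_i) - y_i)^2)
    """
    m = len(y)                          # number of training examples
    predictions = theta0 + theta1 * x   # h(x) for every example at once
    return np.sum((predictions - y) ** 2) / (2 * m)
```

For data that lies exactly on the line y = 2x, the parameters theta0 = 0 and theta1 = 2 give a cost of zero, while any other pair gives a strictly positive cost.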

Minimizing the Cost:-

Now let’s move on to how our program will find the best fit. What I will do is generate arbitrary values of theta0 and theta1, put each pair into the cost function, and calculate the cost for that pair [theta0 and theta1]. This gives me a table with three columns: theta0, theta1, and cost. If I plot those values on a 3-D graph, I get the following shape.
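The procedure above can be sketched as a simple grid evaluation. This is a minimal sketch, not the article's code: the helper name `cost_surface` and the toy data following y = 1 + 2x are my own assumptions for illustration:

```python
import numpy as np

def cost_surface(x, y, theta0_vals, theta1_vals):
    """Evaluate the MSE cost for every (theta0, theta1) pair on a grid."""
    m = len(y)
    costs = np.empty((len(theta0_vals), len(theta1_vals)))
    for i, t0 in enumerate(theta0_vals):
        for j, t1 in enumerate(theta1_vals):
            predictions = t0 + t1 * x                       # hypothesis h(x)
            costs[i, j] = np.sum((predictions - y) ** 2) / (2 * m)
    return costs

# Toy data that follows y = 1 + 2x, so the lowest cost sits near (1, 2).
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 1.0 + 2.0 * x

theta0_vals = np.linspace(-2, 4, 61)
theta1_vals = np.linspace(-1, 5, 61)
costs = cost_surface(x, y, theta0_vals, theta1_vals)

# The grid cell with the lowest cost approximates the best-fit parameters.
i, j = np.unravel_index(np.argmin(costs), costs.shape)
print(theta0_vals[i], theta1_vals[j])  # close to (1.0, 2.0)
```

Passing `theta0_vals`, `theta1_vals`, and `costs` to a 3-D plotting routine such as matplotlib's `plot_surface` reproduces the bowl-shaped surface described here.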

[Figure: 3-D surface plot of the cost as a function of theta0 and theta1]