Introduction to Linear Regression

What is Linear Regression?

Linear regression is a supervised machine learning approach for modeling a linear relationship between a dependent variable and one or more independent variables. In very simple words, it fits a straight line to a set of discrete training points and uses that line to produce a continuous output for any input.

Let’s take an example training data set for predicting house prices: Size in feet (x) and Price in $1000s (y).

The linear regression line is a straight line passing through the points (it may or may not overlap every point). The equation of a straight line is y = mx + c, where m is the slope and c is the constant. In the linear regression model, the slope becomes the weight and the constant acts as the bias, so the basic model is h(x) = Weight * x + bias, where:
W is the weight
b is the bias
x is the "input" variable, or feature
y is the "output" variable, or "target" variable
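Concretely, the hypothesis is just an element-wise multiply-and-add. A quick NumPy sketch (the weight, bias, and inputs below are made-up example values):

```python
import numpy as np

# hypothesis h(x) = Weight * x + bias, applied element-wise to a batch of inputs
Weight = 0.3   # example value
bias = -0.3    # example value
x = np.array([1.0, 2.0, 3.0, 4.0])  # made-up input features

h = Weight * x + bias  # predicted output for each input
print(h)
```

For x = 1 this gives 0.3 * 1 + (-0.3) = 0.0, and similarly for the rest of the batch.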
Let’s build a program in TensorFlow.

import numpy as np
import tensorflow as tf
# declaring and initializing Weight and bias
Weight = tf.Variable([.3], dtype=tf.float32)
bias = tf.Variable([-.3], dtype=tf.float32)
# defining x and y as placeholders, since we don't know their values
# until the training data is fed in
x = tf.placeholder(tf.float32)
y = tf.placeholder(tf.float32)
linear_model = Weight * x + bias

Now we need to learn the values of Weight and bias. To do that, we will use the training data.

Cost Function

A cost function helps us fit the best straight line to our data. It is also called the squared error function, and it is denoted by J(Weight, bias).

We can measure the accuracy of our hypothesis function by using the cost function, which takes the average of the squared differences between the predictions from the inputs x and the actual outputs y:

J(Weight, bias) = (1 / 2m) * Σ (h(x_i) - y_i)²

where h(x_i) is the predicted value, y_i is the actual value, and m is the number of training examples. This is why the function is called the "squared error function" or "mean squared error". The mean is halved (1/2) as a convenience for the computation of gradient descent, as differentiating the squared term produces a factor of 2 that cancels the (1/2).
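The cost function J(Weight, bias) can be computed directly in NumPy (the training values below are made up for illustration):

```python
import numpy as np

def cost(Weight, bias, x, y):
    """Squared error cost J(Weight, bias) = (1 / (2m)) * sum((h(x_i) - y_i)**2)."""
    m = len(x)
    predictions = Weight * x + bias      # h(x_i) for every training example
    return np.sum((predictions - y) ** 2) / (2 * m)

# made-up training data: the exact fit is Weight = -1, bias = 1
x_train = np.array([1.0, 2.0, 3.0, 4.0])
y_train = np.array([0.0, -1.0, -2.0, -3.0])

print(cost(0.3, -0.3, x_train, y_train))   # a poor fit gives a large cost
print(cost(-1.0, 1.0, x_train, y_train))   # a perfect fit gives cost 0
```

The better the line fits the data, the smaller J becomes, which is exactly why minimizing it yields the best Weight and bias.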
Now the objective is to minimize this cost in order to find accurate values for Weight and bias. To achieve this in TensorFlow, we can use the library functions below.

# In order to calculate the loss
loss = tf.reduce_sum(tf.square(linear_model - y))  # sum of the squared errors
# optimizer (0.01 is the learning rate)
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)

tf.reduce_sum(input_tensor): this function adds up the elements across the dimensions of a tensor, so it acts as the summation ∑.
tf.train.GradientDescentOptimizer(): this function is an optimizer that uses the gradient descent algorithm. Gradient descent is a first-order iterative optimization algorithm for finding the minimum of a function.
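Under the hood, the optimizer repeatedly nudges each variable in the direction that decreases the loss. A single gradient descent update on the halved mean squared error J(Weight, bias) can be sketched by hand in NumPy (an illustration of the algorithm, not TensorFlow's actual implementation):

```python
import numpy as np

def gradient_step(Weight, bias, x, y, learning_rate=0.01):
    """One gradient descent update on J = (1 / (2m)) * sum((h(x_i) - y_i)**2)."""
    m = len(x)
    error = (Weight * x + bias) - y           # h(x_i) - y_i for every example
    grad_W = np.sum(error * x) / m            # dJ/dWeight
    grad_b = np.sum(error) / m                # dJ/dbias
    return Weight - learning_rate * grad_W, bias - learning_rate * grad_b

# made-up data: one step should move the parameters toward a better fit
x_train = np.array([1.0, 2.0, 3.0, 4.0])
y_train = np.array([0.0, -1.0, -2.0, -3.0])
new_W, new_b = gradient_step(0.3, -0.3, x_train, y_train)
print(new_W, new_b)
```

Repeating this step many times drives Weight and bias toward the values that minimize the cost; that is exactly the loop the optimizer runs for us.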

Training data and Training loop

# training values (example data: the exact fit is Weight = -1, bias = 1)
x_trainingData = [1, 2, 3, 4]
y_trainingData = [0, -1, -2, -3]
# training
init = tf.global_variables_initializer()  # op that initializes all Variables
sess = tf.Session()
sess.run(init)  # resets Weight and bias to their initial (wrong) values
# training loop
for i in range(1000):
    sess.run(train, {x: x_trainingData, y: y_trainingData})

While training, we have to feed in all of the training data and repeat this task on every iteration of the loop.

Calculate training accuracy

curr_Weight, curr_bias, curr_loss = sess.run([Weight, bias, loss], {x: x_trainingData, y: y_trainingData})
print("W: %s b: %s loss: %s" % (curr_Weight, curr_bias, curr_loss))
We need to get the final weight, bias, and loss, and print them accordingly.
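Since the fragments above target the TensorFlow 1.x session API, here is a self-contained NumPy sketch of the same end-to-end training loop, minimizing the same un-averaged sum-of-squares loss by hand (the training data is made up; its exact fit is Weight = -1, bias = 1):

```python
import numpy as np

# made-up training data; the exact fit is Weight = -1, bias = 1
x_train = np.array([1.0, 2.0, 3.0, 4.0])
y_train = np.array([0.0, -1.0, -2.0, -3.0])

Weight, bias = 0.3, -0.3   # same initial guesses as the TensorFlow variables
learning_rate = 0.01

for i in range(1000):
    error = Weight * x_train + bias - y_train                 # linear_model - y
    Weight -= learning_rate * 2.0 * np.sum(error * x_train)   # d(loss)/dWeight
    bias -= learning_rate * 2.0 * np.sum(error)               # d(loss)/dbias

error = Weight * x_train + bias - y_train
loss = np.sum(error ** 2)  # like tf.reduce_sum(tf.square(linear_model - y))
print("W: %s b: %s loss: %s" % (Weight, bias, loss))
```

After 1000 iterations the parameters land very close to the exact fit (Weight ≈ -1, bias ≈ 1) with a near-zero loss, which is the same behavior the TensorFlow session version should show.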