Machine Learning Week 1: Linear Regression and Cost Function

Rachhek Shrestha
Published in ML Notes
Dec 8, 2016

Things start to get a little mathematical from here on, as linear algebra enters the picture.

A few term definitions:

  1. Training Set: It is the set of data used to train the algorithm; it is what we feed to the algorithm. For example, on a graph, the set of plotted points (x, y) is a training set.
  2. Input variable: It is the value (x), also called the input feature, that we give to the machine in order to compute a predicted value.
  3. Output variable: It is the value (y), also called the ‘target variable’, that is to be predicted from the input.
  4. Hypothesis: It is the function used to predict the target variable from the input variable. Given a training set, our goal is to learn a function h : X → Y so that h(x) is a “good” predictor for the corresponding value of y.
  5. Linear regression: It means that the hypothesis is a linear function of the form y = c + mx, and the output value we are trying to predict is continuous (real-valued).
  6. Cost Function: It is the function we use to measure the accuracy of our predictions, computed by running the hypothesis over every example in the training set and averaging the squared errors. This function is also called the “squared error function” or “mean squared error”: J(θ0, θ1) = (1 / 2m) · Σ (h(x⁽ⁱ⁾) − y⁽ⁱ⁾)², where m is the number of training examples.
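The hypothesis and the cost function above can be sketched in a few lines of Python. This is a minimal illustration, not the course's implementation; the training-set values and the names theta0, theta1 are made up for the example:

```python
# A tiny training set of (x, y) points (illustrative values only).
training_set = [(1.0, 2.0), (2.0, 2.5), (3.0, 3.5), (4.0, 4.5)]

def h(theta0, theta1, x):
    """Hypothesis: a linear function h(x) = theta0 + theta1 * x."""
    return theta0 + theta1 * x

def cost(theta0, theta1, data):
    """Squared error cost J(theta0, theta1) = (1 / 2m) * sum of (h(x) - y)^2."""
    m = len(data)
    return sum((h(theta0, theta1, x) - y) ** 2 for x, y in data) / (2 * m)

print(cost(0.0, 1.0, training_set))  # cost of the hypothesis h(x) = x -> 0.21875
```

The 2 in the 1/(2m) factor is a common convenience: it cancels the 2 that appears when the squared term is differentiated during gradient descent, without changing where the minimum is.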
