Logistic regression is a classification algorithm used to assign observations to a discrete set of classes. Unlike linear regression, which outputs continuous numeric values, logistic regression transforms its output using the logistic sigmoid function to return a probability value, which can then be mapped to two or more discrete classes.

For Example:

Logistic Regression | Image by Author

Given data on time spent studying and exam scores, logistic regression could help us predict whether a student passed or failed: 1 for pass, 0 for fail.
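A minimal sketch of that idea with scikit-learn, using a few made-up hours-studied values and pass/fail labels (the numbers here are purely illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative data: hours studied vs. passed (1) / failed (0)
hours = np.array([[0.5], [1.0], [1.5], [2.0], [3.5], [4.0], [5.0], [6.0]])
passed = np.array([0, 0, 0, 0, 1, 1, 1, 1])

model = LogisticRegression()
model.fit(hours, passed)

# The sigmoid output is a probability, which is then mapped to a class
print(model.predict_proba([[2.5]]))  # [P(fail), P(pass)] for 2.5 hours of study
print(model.predict([[2.5]]))        # the discrete class: 0 or 1
```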


In Machine Learning, a hyperparameter is a parameter whose value is used to control the learning process.

Hyperparameters can be classified as model hyperparameters, which cannot be inferred while fitting the machine to the training set because they refer to the model-selection task, or algorithm hyperparameters, which in principle have no influence on the performance of the model but affect the speed and quality of the learning process. An example of a model hyperparameter is the topology and size of a neural network. Examples of algorithm hyperparameters are the learning rate and mini-batch size.

Different model training algorithms require different hyperparameters; some simple algorithms (such as ordinary least squares regression) require none. Given these hyperparameters, the training algorithm learns the parameters from the data. For instance, LASSO is an algorithm that adds a regularization hyperparameter to ordinary least squares regression, which has to be set before estimating the parameters through the training algorithm. …
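For example, with scikit-learn the LASSO regularization strength is the alpha argument, a hyperparameter chosen before fitting, while the coefficients are parameters learned from the data (the data below is synthetic):

```python
import numpy as np
from sklearn.linear_model import Lasso

# Synthetic data: 100 samples, 5 features, only 3 of which matter
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([3.0, 0.0, -2.0, 0.0, 1.0]) + rng.normal(scale=0.1, size=100)

# alpha is the regularization hyperparameter: set before training, not learned
model = Lasso(alpha=0.1)
model.fit(X, y)

print(model.coef_)  # the learned parameters; some coefficients are shrunk towards zero
```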


Visual representation of the prediction line | Image for post

Linear regression is a common machine learning technique that predicts a real-valued output using a weighted linear combination of one or more input values.

The “learning” part of linear regression is to figure out a set of weights w1, w2, w3, … w_n, b that leads to good predictions. This is done by looking at lots of examples one by one (or in batches) and adjusting the weights slightly each time to make better predictions, using an optimisation technique called Gradient Descent.

Let’s create some sample data with one feature “x” and one dependent variable “y”. We’ll assume that “y” is a linear function of “x”, with some noise added to account for features we haven’t considered here. …
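One way to sketch that, assuming a true slope of 3 and intercept of 2 with Gaussian noise (those exact numbers are just for illustration), and then fitting w and b with gradient descent on the MSE loss:

```python
import numpy as np

rng = np.random.default_rng(42)

# One feature "x" and a dependent variable "y" = 3x + 2 plus noise
x = rng.uniform(0, 10, size=100)
y = 3.0 * x + 2.0 + rng.normal(scale=1.5, size=100)

# Gradient descent: nudge w and b a little at a time to reduce the MSE
w, b, lr = 0.0, 0.0, 0.01
for _ in range(5000):
    error = (w * x + b) - y
    w -= lr * 2 * np.mean(error * x)
    b -= lr * 2 * np.mean(error)

print(w, b)  # should end up close to 3 and 2
```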



In this notebook we will predict the quality of wine using PyTorch and linear regression. If you haven't checked out my previous blog on linear regression, check it out.

Let's see what we are dealing with…

First of all, let's import the required libraries.
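A minimal set of imports that should cover this notebook, assuming we stick to PyTorch, pandas, and NumPy:

```python
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
```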

Let's load our dataset.

You can download the dataset from this link.
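Assuming the downloaded file is the UCI red-wine CSV, winequality-red.csv (which is semicolon-separated), loading it might look like this; adjust the filename and separator to match your copy:

```python
# Read the wine-quality data into a pandas DataFrame
df = pd.read_csv("winequality-red.csv", sep=";")
```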

Now let's analyse our dataset; it's important to explore it to see what we are dealing with.
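Continuing from the snippet above, a few quick pandas calls give a first look at the data:

```python
print(df.shape)          # number of rows and columns
print(df.head())         # first few rows
print(df.describe())     # summary statistics for each column
print(df.isna().sum())   # missing values per column
```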

Let's separate our input and output columns.

Now initialise the input and output vectors.
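Assuming the target column is called quality (as in the UCI wine dataset) and every other column is a feature, the split could look like this:

```python
# Every column except the target is an input feature
input_cols = [col for col in df.columns if col != "quality"]
target_col = "quality"

# Initialise the input and output arrays from the DataFrame
inputs_array = df[input_cols].to_numpy()
targets_array = df[[target_col]].to_numpy()
```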

It's really important to convert the datatype of the vectors to float, because PyTorch works with float tensors.
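Converting the arrays to float32 tensors, which is what PyTorch layers and loss functions expect by default:

```python
# PyTorch's nn layers work with float tensors, so cast explicitly
inputs = torch.from_numpy(inputs_array).float()
targets = torch.from_numpy(targets_array).float()

print(inputs.dtype, targets.dtype)  # torch.float32 torch.float32
```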


This is quite similar to the simple (uni-variate) linear regression model (click here if you haven't checked out my blog on uni-variate linear regression), but with multiple independent variables contributing to the dependent variable, and hence multiple coefficients to determine and more complex computation due to the added variables.

In a simple linear regression model, a dependent variable guided by a single independent variable is a good start, but of limited use in real-world scenarios. Generally, one dependent variable depends on multiple factors. For example, the rent of a house depends on many factors: the neighbourhood it is in, its size, the number of rooms, attached facilities, the distance to the nearest station, the distance to the nearest shopping area, and so on. How do we deal with such scenarios? …
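In that case the prediction becomes a weighted sum over several inputs, y = w1*x1 + w2*x2 + … + wn*xn + b, with one weight per feature. A rough PyTorch sketch with five made-up rent-related features (the feature count and names are purely illustrative):

```python
import torch
import torch.nn as nn

# One weight per feature: e.g. size, rooms, station distance, shop distance, neighbourhood score
model = nn.Linear(in_features=5, out_features=1)

houses = torch.randn(10, 5)      # a batch of 10 houses, 5 features each
rent_pred = model(houses)        # one predicted rent per house
print(rent_pred.shape)           # torch.Size([10, 1])
```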


Uni-variate linear regression focuses on determining the relationship between one independent (explanatory) variable and one dependent variable.

Here we will predict a student's exam score based on how much time they spent studying. We will optimise our parameters using gradient descent and calculate the loss with Mean Squared Error.

We will use student_score.csv as our dataset.

First, let's import all the necessary Python libraries…

As you can see, our data-frame contains 2 columns, in which Hours is the feature and Scores is the target. …
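A condensed sketch of the whole workflow, assuming student_score.csv really does contain the two columns Hours and Scores (names taken from the description above):

```python
import numpy as np
import pandas as pd

# Load the data and pull out the feature and target columns
df = pd.read_csv("student_score.csv")
x = df["Hours"].to_numpy()
y = df["Scores"].to_numpy()

# Gradient descent on the Mean Squared Error for the model y_pred = w * x + b
w, b, lr = 0.0, 0.0, 0.01
for _ in range(5000):
    error = (w * x + b) - y
    w -= lr * 2 * np.mean(error * x)
    b -= lr * 2 * np.mean(error)

mse = np.mean(((w * x + b) - y) ** 2)
print(f"w={w:.3f}, b={b:.3f}, MSE={mse:.3f}")
```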

About

Rishabh Roy

Data Analytics | Solving the unsolvable with ML, deep learning. Breaking down barriers.
