Machine Learning (Part 23): Hypothesis Representation of Logistic Regression

Coursesteach · Feb 5, 2024

📚 Chapter 5: Logistic Regression

Introduction

Let’s start talking about logistic regression. In this tutorial, I’d like to show you the hypothesis representation, that is, the function we’re going to use to represent our hypothesis when we have a classification problem.

Sections

Logistic Regression Model
Interpretation of the Logistic Regression Model

Section 1: Logistic Regression Model

Earlier, we said that we would like our classifier to output values between zero and one. So we would like to come up with a hypothesis that satisfies this property, that its predictions always lie between zero and one. When we were using linear regression, the hypothesis had the form h_θ(x) = θᵀx.

For logistic regression, I’m going to modify this a little bit and make the hypothesis h_θ(x) = g(θᵀx), where I define the function g as follows: for a real number z, g(z) = 1 / (1 + e^(−z)). This is called the sigmoid function, or the logistic function, and the term logistic function is what gives rise to the name logistic regression. By the way, the terms sigmoid function and logistic function are synonyms, so they are interchangeable and either one can be used to refer to this function g.

And if we take these two equations and put them together, here’s just an alternative way of writing out the form of my hypothesis: h_θ(x) = 1 / (1 + e^(−θᵀx)). All I’ve done is take the variable z, which is a real number, and plug in θᵀx in its place.
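To make this concrete, here is a minimal NumPy sketch of these two equations; the function and variable names are my own choices, not from the lecture.

```python
import numpy as np

def sigmoid(z):
    """Sigmoid (logistic) function: g(z) = 1 / (1 + e^(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

def hypothesis(theta, x):
    """Logistic regression hypothesis: h_theta(x) = g(theta^T x)."""
    return sigmoid(np.dot(theta, x))
```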

Lastly, let me show you what the sigmoid function looks like; we can plot it, as in the sketch below. The sigmoid function g(z), also called the logistic function, starts off near zero, rises until it crosses 0.5 at the origin, and then flattens out again. Notice that the sigmoid function asymptotes at one and at zero: as z, the variable on the horizontal axis, goes to minus infinity, g(z) approaches zero, and as z goes to plus infinity, g(z) approaches one. And because g(z) outputs values between 0 and 1, h_θ(x) must also lie between 0 and 1.
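Since the original figure is not reproduced here, this is a minimal matplotlib sketch that recreates the S-shaped curve described above:

```python
import numpy as np
import matplotlib.pyplot as plt

z = np.linspace(-10, 10, 200)
g = 1.0 / (1.0 + np.exp(-z))        # sigmoid values for each z

plt.plot(z, g)
plt.axhline(0.5, linestyle="--", linewidth=0.8)  # crosses 0.5 at z = 0
plt.axhline(1.0, linestyle=":", linewidth=0.8)   # upper asymptote
plt.axhline(0.0, linestyle=":", linewidth=0.8)   # lower asymptote
plt.xlabel("z")
plt.ylabel("g(z)")
plt.title("The sigmoid (logistic) function")
plt.show()
```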

Finally, given this hypothesis representation, what we need to do, as before, is fit the parameters θ to our data. So given a training set, we need to pick a value for the parameters θ, and this hypothesis will then let us make predictions. We’ll talk about a learning algorithm for fitting the parameters θ later.

Section 2: Interpretation of the Logistic Regression Model

But first, let’s talk a bit about the interpretation of this model. Here’s how I’m going to interpret the output of my hypothesis h_θ(x): when my hypothesis outputs some number, I am going to treat that number as the estimated probability that y equals one for a new input example x.

Here is what I mean; here is an example. Let’s say we’re using the tumor classification example. So we have a feature vector x, with x₀ = 1 as always, and then one feature x₁, the size of the tumor. Suppose a patient comes in with some tumor size, I feed their feature vector x into my hypothesis, and suppose my hypothesis outputs the number 0.7. I’m going to interpret this as follows: the hypothesis is telling me that, for a patient with features x, the probability that y equals one is 0.7. In other words, I’m going to tell my patient that the tumor, sadly, has a 70% (0.7) chance of being malignant.
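Here is a small sketch of that prediction. The parameter values and tumor size below are made up purely so the output lands near 0.7; real parameters would be learned from training data.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical values for illustration only; real theta is learned from data.
theta = np.array([-5.0, 1.7])   # [theta_0, theta_1]
x = np.array([1.0, 3.44])       # [x_0 = 1, x_1 = tumor size]

p_malignant = sigmoid(np.dot(theta, x))         # h_theta(x)
print(f"P(y = 1 | x; theta) ~ {p_malignant:.2f}")   # ~ 0.70
```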

To write this out slightly more formally, in math, I’m going to interpret my hypothesis output as h_θ(x) = P(y = 1 | x; θ), the probability that y equals 1 given x, parameterized by θ. For those of you familiar with probability, this equation will make sense; if you’re a little less familiar with probability, here’s how I read this expression: it is the probability that y equals one given that my patient has features x, that is, given that my patient has a particular tumor size represented by my features x, and this probability is parameterized by θ. So I’m basically going to count on my hypothesis to give me estimates of the probability that y equals 1.

Now, since this is a classification task, we know that y must be either zero or one, right? Those are the only two values y could possibly take on, either in the training set or for new patients who may walk into my office, or into the doctor’s office, in the future. So given h_θ(x), we can therefore compute the probability that y equals zero as well. Concretely, because y must be either zero or one, we know that P(y = 0 | x; θ) + P(y = 1 | x; θ) = 1. This equation looks a little bit complicated, but it’s basically saying that the probability that y equals zero for a particular patient with features x and parameters θ, plus the probability that y equals one for that same patient with features x and parameters θ, must add up to one. If it looks complicated, feel free to mentally imagine it without the x and θ: it’s just saying that the probability of y equals zero plus the probability of y equals one must equal one, and we know this to be true because y has to be either zero or one.

So if you take the P(y = 1) term and move it to the right-hand side, you end up with P(y = 0 | x; θ) = 1 − P(y = 1 | x; θ). Thus, if our hypothesis h_θ(x) gives us the probability that y equals one, we can quite simply compute the estimated probability that y equals zero as well; the short sketch at the end of this section makes this concrete.

So you now know what the hypothesis representation for logistic regression is, and we’ve seen the mathematical formula defining the hypothesis for logistic regression. In the next tutorial, I’d like to give you better intuition about what the hypothesis function looks like. I’ll tell you about something called the decision boundary, and we’ll look at some visualizations together to get a better sense of what this hypothesis function of logistic regression really looks like.
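Here is the complement computation sketched in code, reusing the 0.7 estimate from the tumor example above:

```python
# y can only be 0 or 1, so the two probabilities must sum to 1:
# P(y = 0 | x; theta) = 1 - P(y = 1 | x; theta)
p_y1 = 0.7            # h_theta(x) from the tumor example
p_y0 = 1.0 - p_y1     # estimated probability that the tumor is benign
print(p_y0)           # 0.3 -> a 30% chance of being benign
```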

Please follow and 👏 clap for Coursesteach to see the latest updates on this story.

If you want to learn more about these topics: Python, Machine Learning, Data Science, Statistics for Machine Learning, Linear Algebra for Machine Learning, Computer Vision, and Research,

then log in and enroll in Coursesteach to get fantastic content in the data field.

Stay tuned for our upcoming articles, where we will explore specific topics related to Machine Learning in more detail, researched end to end!

Remember, learning is a continuous process. So keep learning, keep creating, and keep sharing with others! 💻✌️

Note: if you are a Machine Learning expert and have suggestions to improve this blog, please share them in the comments and contribute.

If you want more updates about Machine Learning and would like to contribute, then follow and enroll in the following:

👉Course: Machine Learning (ML)

👉📚GitHub Repository

👉 📝Notebook

Do you want to get into data science and AI and need help figuring out how? I can offer you research supervision and long-term career mentoring.
Skype: themushtaq48, email:mushtaqmsit@gmail.com

Contribution: We would love your help in making the Coursesteach community even better! If you want to contribute to some courses, or if you have any suggestions for improving any Coursesteach content, feel free to contact us and follow.

Together, let’s make this the best AI learning community! 🚀

👉WhatsApp

👉 Facebook

👉Github

👉LinkedIn

👉Youtube

👉Twitter

Source

1. Machine Learning — Andrew Ng
