Learn Probability and Statistics for AI and ML with Ease! (Part 2)

Continuing Our Journey: Where We Left Off…

Varsha Pawar
10 min read · May 28, 2024

Welcome back, everyone, to our fun and exciting learning journey! We’re diving deeper into the secrets of Probability and Statistics for AI/ML, using real-world scenarios to make these concepts come alive. This is a continuation of Part 1, so if you missed anything, be sure to go back and catch up on what we’ve covered so far. Let’s continue unraveling these fascinating topics together!

1] Clustering:

Suppose you have a jar full of toys and you want to sort them into similar groups. For example, you will separate all the car toys into one group, animal toys into another group, and fruit toys into a separate group.

Photo by cottonbro studio on Pexels

This process of grouping items or toys based on their similarities is called clustering.

Example: K-means Clustering

Use Case: Grouping similar elements into one set, which is essential for segmenting customers with similar interests for targeted marketing and for recommending items based on those similarities.
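To make this concrete, here is a minimal K-means sketch using scikit-learn; the “customer” numbers (spend and visits) are made up purely for illustration.

```python
# A minimal K-means sketch with scikit-learn; the data is invented for illustration
import numpy as np
from sklearn.cluster import KMeans

# Toy "customer" data: [annual spend, number of visits]
X = np.array([[100, 5], [110, 6], [400, 20], [420, 22], [800, 40], [790, 38]])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)
print(kmeans.labels_)           # which cluster each customer was placed in
print(kmeans.cluster_centers_)  # the centre of each cluster
```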

2] Classification:

Imagine we have lots of photos of dogs and cats; the process of separating the dog photos from the cat photos is called classification.

Photo by Tamhasip Khan on Pexels

Example: Logistic Regression, Decision Tree Classifier, and Support Vector Classifier.

Use Case: Assigning class labels to instances based on their features, such as classifying emails as spam or not spam.
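As a rough sketch (assuming scikit-learn, with invented features such as link and exclamation-mark counts standing in for spam signals):

```python
# A minimal classification sketch with logistic regression; data is made up
from sklearn.linear_model import LogisticRegression

# Each row: [number of links, number of exclamation marks]; label 1 = spam, 0 = not spam
X = [[10, 8], [9, 7], [1, 0], [0, 1], [8, 9], [2, 1]]
y = [1, 1, 0, 0, 1, 0]

clf = LogisticRegression().fit(X, y)
print(clf.predict([[7, 6], [1, 1]]))  # predicted labels for two new emails
```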

3] Regression Analysis:

Let’s continue with the toy example, now that we have toys of different types and sizes.

Photo by omerderinyar on Pexels

If we sort the toys by size or height, we arrange them from smallest to largest. Using this, we can predict the size of the next new toy based on the sizes of the toys we have already seen.

Example: Linear regression, Decision Tree Regressor, SVR.

Use Case: Predicting a continuous target variable from input features, such as predicting a house price from its size, location, and number of bedrooms.
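Here is a minimal regression sketch with scikit-learn; the sizes and prices below are invented just to show the idea of predicting a continuous value.

```python
# A minimal linear regression sketch; all numbers are made up
from sklearn.linear_model import LinearRegression

X = [[50], [80], [100], [120], [150]]   # house size in square metres
y = [150, 240, 300, 360, 450]           # price in thousands

reg = LinearRegression().fit(X, y)
print(reg.predict([[110]]))  # predicted price for a 110 m^2 house
```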

4] Hypothesis Testing:

It’s like playing detective.

Photo by cottonbro studio on Pexels

Suppose I take a few toys out of each box and count them. Based on the counts, I try to figure out whether there is enough evidence to say that one box really has more toys than the other.

Hypothesis testing is a statistical method used to determine if there is enough evidence to support or reject a claim about a population parameter based on sample data. It involves formulating a null hypothesis and an alternative hypothesis, conducting a statistical test, and interpreting the results to make an inference about the population.

Example: Chi-square test, t-test, z-test, ANOVA.

Use Case: Determining whether there is a significant difference between two groups based on their means and standard deviations.
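A minimal sketch of a two-sample t-test with SciPy, using made-up toy counts from the two boxes:

```python
# A minimal two-sample t-test sketch; the counts are invented
from scipy import stats

box_a = [12, 15, 14, 13, 16, 15]  # toys counted in samples from box A
box_b = [10, 11, 9, 12, 10, 11]   # toys counted in samples from box B

t_stat, p_value = stats.ttest_ind(box_a, box_b)
print(p_value)  # a small p-value (e.g. < 0.05) suggests the boxes really do differ
```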

5] Measure of Central Tendency:

It’s like the guiding stars that help us find our path in the night sky. Measures of central tendency, such as the mean, median, and mode, point us toward the “center” of our data.

Photo by Rachel Claire on Pexels
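As a quick sketch (with made-up data), the three usual measures of central tendency are easy to compute in Python:

```python
# Mean, median, and mode of a small made-up dataset
import numpy as np
from statistics import mode

data = [2, 3, 3, 5, 7, 8, 3, 6]

print(np.mean(data))    # mean: the average value
print(np.median(data))  # median: the middle value when the data is sorted
print(mode(data))       # mode: the most frequent value
```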

6] Measure of Dispersion:

It’s like the waves of the ocean, which show how spread out and “bumpy” our data is. Measures of dispersion, such as the range, variance, and standard deviation, tell us how far the values stray from that center.

Photo by Sascha Thiele on Pexels
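A matching sketch for dispersion, again with made-up data:

```python
# Range, variance, and standard deviation of a small made-up dataset
import numpy as np

data = [2, 3, 3, 5, 7, 8, 3, 6]

print(max(data) - min(data))  # range: total spread of the values
print(np.var(data))           # variance: average squared distance from the mean
print(np.std(data))           # standard deviation: square root of the variance
```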

7] Bayesian Inference:

Suppose we have a magic eight ball that can predict the weather.

Bayesian inference helps us update our belief based on new information, such as looking outside the window to see if it is cloudy or sunny.

Photo by cottonbro studio on Pexels

Example: Bayesian Network

Use case: Modeling uncertain relationships between variables and making predictions based on prior knowledge and observed evidence.
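To see the update in action, here is a tiny Bayes’ theorem calculation for the weather example; every probability below is an assumed, made-up number.

```python
# A minimal Bayes' theorem sketch; all probabilities are made up for illustration
p_rain = 0.3                # prior belief that it will rain today
p_cloudy_given_rain = 0.8   # chance of clouds on a rainy day
p_cloudy_given_dry = 0.2    # chance of clouds on a dry day

# P(cloudy), by the law of total probability
p_cloudy = p_cloudy_given_rain * p_rain + p_cloudy_given_dry * (1 - p_rain)

# Posterior: our updated belief in rain after looking outside and seeing clouds
p_rain_given_cloudy = p_cloudy_given_rain * p_rain / p_cloudy
print(p_rain_given_cloudy)  # about 0.63: seeing clouds raises our belief in rain
```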

8] Probability Density Function (PDF):

Think of a PDF as a recipe for making cookies. It tells us how much sugar, how much butter, and how much flour we need.

Similarly, a PDF tells us how likely it is to find different values in the dataset.

Photo by Taryn Elliott on Pexels

Let’s take an example. Suppose we have a big box of balls, and every ball has a number on it saying how many candies we have eaten. The PDF is like a magical recipe book: it tells us how likely it is for us to pick up a certain number. It’s like saying, “Hmm, there are a lot of balls around the number 5, so it’s more likely we’ll pick a number close to 5.”

Example: Kernel Density Estimation (KDE)

Use case: Estimating the underlying probability distribution of a dataset, useful in anomaly detection and generative modeling.
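A minimal KDE sketch with SciPy; the samples (the numbers written on the balls) are invented for illustration.

```python
# Kernel density estimation with SciPy; the samples are made up
import numpy as np
from scipy.stats import gaussian_kde

samples = np.array([4, 5, 5, 5, 6, 5, 4, 6, 5, 7, 3, 5])

kde = gaussian_kde(samples)
print(kde(5))   # estimated density near 5: high, so numbers close to 5 are likely
print(kde(10))  # estimated density near 10: very low, so 10 is unlikely
```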

9] Cumulative Distribution Function (CDF):

Let’s imagine you’re playing a dice game where you roll the dice and add up the numbers each time.

The cumulative distribution function (CDF) keeps a running tally of your scores and tells you, for any value, how often the total has come out at or below that value.

Photo by lil artsy on Pexels

For example, after many rounds of the game, the CDF at a score of 10 tells you how often the total score has been 10 or less.

The PDF tells us how likely it is to pick up a certain number, whereas the CDF keeps track of the numbers we have seen and how often they fall at or below each value.

It is as if the PDF works as a magic recipe book and the CDF works as a scoreboard; together they help us understand the numbers and the game better.
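As a small sketch, SciPy lets us compare the PDF and CDF of a simple bell curve; the distribution and its parameters are chosen arbitrarily here.

```python
# PDF vs CDF for a normal distribution with arbitrary parameters
from scipy.stats import norm

dist = norm(loc=10, scale=2)  # a bell curve centred at 10

print(dist.pdf(10))                # PDF: how dense values are right around 10
print(dist.cdf(10))                # CDF: probability of a value of at most 10 (0.5 here)
print(dist.cdf(13) - dist.cdf(7))  # probability of landing between 7 and 13
```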

10] Markov Chains:

Let’s imagine we’re playing a game where we move from one room to another. So, if you’re in the living room, you can go to the kitchen, bathroom, or bedroom. Where you go next just depends on where you are at the moment, not where you’ve been before.

That’s essentially what a Markov chain is; it tells you the next step to take based on your current state or step.

Photo from Pixabay

So, if you are in the living room, the Markov chain provides the probabilities of you moving to the kitchen, or the bedroom, or the bathroom next. If you then move to the kitchen, the Markov chain gives the probabilities of you choosing to go to the living room, the bedroom, or the bathroom from there, and so on.

Photo by Satoshi Hirayama on Pexels
  • Example: Hidden Markov Models (HMM)
  • Use Case: Modelling sequential data where the future state depends only on the current state, such as speech recognition and part-of-speech tagging.
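Here is a minimal sketch of the room-to-room chain described above (a plain Markov chain rather than an HMM); all transition probabilities are made up.

```python
# Simulating the room-to-room Markov chain; transition probabilities are invented
import numpy as np

rooms = ["living room", "kitchen", "bedroom", "bathroom"]

# transition[i][j] = probability of moving from room i to room j next
transition = np.array([
    [0.0, 0.5, 0.3, 0.2],   # from the living room
    [0.6, 0.0, 0.2, 0.2],   # from the kitchen
    [0.4, 0.2, 0.0, 0.4],   # from the bedroom
    [0.5, 0.3, 0.2, 0.0],   # from the bathroom
])

rng = np.random.default_rng(0)
state = 0  # start in the living room
for _ in range(5):
    # the next room depends only on the current room, not on the path taken so far
    state = rng.choice(len(rooms), p=transition[state])
    print(rooms[state])
```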

11] Singular Value Decomposition (SVD):

SVD is like dividing a big picture or task into smaller, manageable tasks. Let’s take the example of solving a complex puzzle or assembling a car; SVD does this by dividing it into three parts:

i] First part (U): These pieces show the main shapes or patterns in the puzzle.

Photo by Mike Bird on Pexels

It’s like finding the most important features or directions in the data, similar to identifying the most important parts for building a car, such as the engine, gears, and brakes. The direction would indicate fitting these parts into the front section of the car (bonnet).

ii] Second part (S): These pieces tell you how significant each of these main shapes or patterns is. It’s like ranking them from most important to least important.

Photo by Anastasia Shuraeva on Pexels

For the car, this would mean assembling the engine, gearbox, and brakes first. Next, we would arrange the steering, wheels, airbags, and seats. Finally, we would handle less critical tasks like putting covers on seats, installing window shields, and painting the car. This organizes all the parts in a sequence based on their priority.

iii] Third part (V): This explains how each piece fits into the main shapes or patterns. It’s like understanding how each part of our data contributes to forming the big picture.

Photo by Gustavo Fring on Pexels

For the car, this involves fitting the engine in the front and arranging the brake, clutch, and accelerator side by side. This shows how all these parts work together to form the whole car (the so-called “big picture”).

Now, how is this useful?

  • Dimensionality Reduction: You can use SVD to reduce the dimensionality of your data by keeping only the most important patterns or directions (determined by the singular values). This can help simplify complex datasets while preserving important information.
  • Data Compression: SVD can also be used for data compression. By keeping only the most significant patterns (with the largest singular values), you can represent the original data with fewer numbers, saving storage space.
  • Noise Reduction: SVD can help remove noise from data by focusing on the most significant patterns and filtering out the less important ones.

In summary, consider SVD’s work as breaking down a complex dataset into simpler, more manageable parts. This allows you to understand its underlying structure, helps in reducing its dimensionality, compressing it efficiently, and even removing noise.
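A minimal NumPy sketch of SVD and a low-rank reconstruction; the matrix is built with two strong patterns plus a little noise so the effect is visible.

```python
# SVD and rank-2 reconstruction of a small, nearly rank-2 matrix
import numpy as np

rng = np.random.default_rng(0)
# Two strong patterns plus a little noise
A = rng.normal(size=(6, 2)) @ rng.normal(size=(2, 4)) + 0.01 * rng.normal(size=(6, 4))

U, S, Vt = np.linalg.svd(A, full_matrices=False)
print(S)  # singular values: two large ones, two tiny ones

k = 2     # keep only the two strongest patterns
A_k = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]
print(np.linalg.norm(A - A_k))  # tiny error: the kept patterns capture almost everything
```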

12] Entropy:

Let’s imagine you’re playing a guessing game where you have to guess the color of a ball in a box.

Entropy is like a measure of how uncertain or messy the box is.

Photo by Cleyton Ewerton on Pexels
  • Low Entropy: If all the balls in the box are the same color, like all red, then there’s low entropy because it’s very certain and organized. You don’t have to guess much.
  • High Entropy: But if the balls are all different colors, like red, green, blue, etc., then there’s high entropy because it’s very messy and uncertain. You have to guess a lot more.
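As a quick sketch, the entropy of the two boxes can be computed with SciPy; the colour proportions are assumed.

```python
# Shannon entropy of the two boxes of balls; proportions are made up
from scipy.stats import entropy

one_colour = [1.0, 0.0, 0.0]        # all balls red: no uncertainty
mixed = [1/3, 1/3, 1/3]             # red, green, blue equally likely: maximum uncertainty

print(entropy(one_colour, base=2))  # 0.0 bits (low entropy)
print(entropy(mixed, base=2))       # about 1.58 bits (high entropy)
```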

13] Information Gain:

Let’s continue with the same example. Imagine you’re trying to figure out the best question to ask to guess the color of the ball.

Information gain is like finding the question that reduces the uncertainty the most.

Photo by Laura James on Pexels
  • High Information Gain: If you ask a question like “Is the ball red?” and most of the balls are red, then you’ve reduced the uncertainty a lot. You’ve gained a lot of information.
  • Low Information Gain: But if you ask a question like “Is the ball a primary color?”, it doesn’t help much because it doesn’t reduce the uncertainty significantly. You haven’t gained much information.

So, in short, entropy is all about how messy or uncertain things are, and information gain is about finding the best questions to reduce that uncertainty.
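A small sketch of information gain for the “Is the ball red?” question, with made-up ball counts; it simply computes the entropy before the question and the weighted entropy after it.

```python
# Information gain = entropy before a split minus weighted entropy after it
from scipy.stats import entropy

def H(counts):
    # Shannon entropy (in bits) of a list of class counts
    total = sum(counts)
    return entropy([c / total for c in counts], base=2)

# A box with 8 red and 8 non-red balls
before = H([8, 8])

# "Is the ball red?" splits the box into two perfectly pure groups
after = (8 / 16) * H([8, 0]) + (8 / 16) * H([0, 8])

print(before - after)  # 1.0 bit of information gain: the question removes all uncertainty
```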

14] Cross-Entropy:

Imagine you have two boxes filled with balls of different colors. One box represents your guesses, and the other contains the actual colors of the balls.

Cross-entropy measures the difference between your guesses and the actual colors.

Photo by Goda Morgan on Pexels

If your guesses perfectly match the actual colors, the cross-entropy is low because there’s little difference. However, if your guesses are far from the actual colors, the cross-entropy is high due to the significant difference.
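As a rough sketch, cross-entropy can be computed directly from two probability vectors; the colour proportions below are invented.

```python
# Cross-entropy between guessed and actual colour distributions; proportions are made up
import numpy as np

actual = np.array([0.7, 0.2, 0.1])       # true proportions of red, green, blue
good_guess = np.array([0.6, 0.3, 0.1])   # close to the truth
bad_guess = np.array([0.1, 0.2, 0.7])    # far from the truth

def cross_entropy(p, q):
    return -np.sum(p * np.log2(q))       # H(p, q), measured in bits

print(cross_entropy(actual, good_guess))  # lower: the guesses match the truth well
print(cross_entropy(actual, bad_guess))   # higher: the guesses are far off
```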

15] KL-Divergence (Kullback-Leibler Divergence):

Now, imagine you’re comparing two different guessing strategies.

KL-divergence measures how much one strategy diverges from the other.

Photo by Kaboompics .com on Pexels

If the two strategies produce very similar guesses, the KL-divergence is low because there’s little difference between them. However, if the two strategies yield very different guesses, the KL-divergence is high due to the significant difference between them.

In summary, cross-entropy measures the difference between your guesses and the actual outcomes, while KL-divergence measures the difference between two different sets of guesses.
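A matching sketch of KL-divergence between two guessing strategies; scipy.stats.entropy returns the KL-divergence when given two distributions, and the strategies here are made up.

```python
# KL-divergence between guessing strategies; the distributions are invented
from scipy.stats import entropy

strategy_a = [0.7, 0.2, 0.1]
strategy_b = [0.6, 0.3, 0.1]   # similar to strategy A
strategy_c = [0.1, 0.2, 0.7]   # very different from strategy A

print(entropy(strategy_a, strategy_b, base=2))  # low: the strategies are close
print(entropy(strategy_a, strategy_c, base=2))  # high: the strategies diverge a lot
```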

16] Mutual Information:

Imagine you have two friends, Alice and Bob, who are playing a guessing game with colored balls. Alice has a box of balls in different colors (such as red, blue, and green), and Bob has another box with balls in the same colors (red, blue, and green).

Photo by Ksenia Chernaya on Pexels

Alice and Bob take turns guessing the color of a randomly chosen ball from their respective boxes. After each round of guessing, they compare their guesses to see how well they did.

Mutual information measures how much information Alice’s guess provides Bob about the color of the ball, and vice versa.

For example, if Bob’s guess consistently matches Alice’s, say both guess ‘red’ round after round, the mutual information is high because Bob’s guess gives a lot of information about Alice’s guess. But if their guesses show no pattern at all, say Alice guesses ‘red’ while Bob guesses ‘blue’ at random, the mutual information is low because Bob’s guess doesn’t provide much information about Alice’s guess.

In summary, mutual information quantifies the amount of information that two random variables share with each other. It’s a measure of how much knowing one variable tells you about the other.
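A minimal sketch with scikit-learn’s mutual_info_score, using made-up guesses encoded as numbers (0 = red, 1 = green, 2 = blue); the score is reported in nats.

```python
# Mutual information between Alice's and Bob's guesses; the guesses are invented
from sklearn.metrics import mutual_info_score

alice      = [0, 0, 2, 1, 2, 0, 1, 2]  # Alice's guesses
bob_agrees = [0, 0, 2, 1, 2, 0, 1, 2]  # Bob always agrees with Alice
bob_other  = [2, 1, 0, 0, 1, 2, 0, 1]  # Bob's guesses follow no clear pattern

print(mutual_info_score(alice, bob_agrees))  # high: Bob's guess says a lot about Alice's
print(mutual_info_score(alice, bob_other))   # lower: the guesses share less information
```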

Conclusion:

I hope you enjoyed this simple and easy journey through the concepts of probability and statistics!

If you found this blog helpful or interesting, please give it a clap or leave a comment below. Your feedback is valuable and helps me improve. Feel free to share your thoughts, ask questions, or suggest topics for future posts. Let’s keep the conversation going!
