Machine Learning #2 — Supervised and Unsupervised Learning

Welcome back to my series exploring the realms of Machine Learning. In this post, I'll carry on from where I left off last time, so if you haven't caught the previous post, please do feel free to check it out.

So, let’s get crunching.

In my last post, I outlined how Machine Learning itself branches off into two big arms — Supervised Learning, and Unsupervised Learning. So what does that mean?

Supervised vs. Unsupervised — What’s the Difference?

At first, this concept may be slightly tricky to fully grasp, but in essence the distinction comes down to this: in supervised learning, the machine is trained on existing data where the correct outputs are already known, and it learns to produce those desired outputs. In unsupervised learning, on the other hand, no such 'correct' outputs are provided.

To unfold this notion further: in supervised learning, you give the machine a set of 'right' answers, that is, data we already know to be correct. So, going back to my example of tumor detection, you tell the computer that for a size of, say, 2mm, the tumor is benign.

From this, you're telling the computer what the 'correct' answers are (e.g. that a tumor of size x mm is either benign or malignant). As the computer analyzes and trains on this data, it works to replicate that 'correctness'. So the tumor detection problem highlighted in the last post is an instance of supervised learning, where datasets and their desired outputs were fed in as 'evidence' and employed to sharpen and train the machine.
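To make this concrete, here's a minimal sketch of supervised learning in Python. The tumor sizes, labels, and the threshold-learning rule are all hypothetical simplifications I've made up for illustration; real systems would use far richer features and proper learning algorithms:

```python
# Hypothetical tumor sizes (mm), each labelled with a known 'right' answer:
# 0 = benign, 1 = malignant. This labelled data is what makes it *supervised*.
sizes  = [1.0, 1.5, 2.0, 2.5, 6.0, 7.0, 8.5, 9.0]
labels = [0,   0,   0,   0,   1,   1,   1,   1]

def train_threshold(xs, ys):
    """'Train' by picking the size cut-off that misclassifies the fewest
    labelled examples, i.e. best replicates the known correct answers."""
    best_t, best_errors = None, len(ys) + 1
    for t in xs:
        errors = sum((x >= t) != bool(y) for x, y in zip(xs, ys))
        if errors < best_errors:
            best_t, best_errors = t, errors
    return best_t

threshold = train_threshold(sizes, labels)

def predict(size):
    # Apply the learned rule to a new, unlabelled tumor size
    return "malignant" if size >= threshold else "benign"
```

The key point is that the machine only 'knows' what malignant looks like because we supplied labelled examples for it to imitate.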

A diagram I pulled off from a lecture by Prof. Andrew Ng. on Machine Learning

Over on the other side of the fence, unsupervised learning features no labels. You simply throw in a scatter of data, and the algorithm clusters the points into groups. These data points carry no inferences or labels, no indication of what is right and what is wrong. From that, an unsupervised learning algorithm will scan over the data, pick out the clusters it recognizes, and divide the points up accordingly.
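The clustering idea above can be sketched with a tiny k-means loop. The points are made up, and the seeding (using the first k points as starting centres) is a naive simplification; this is only meant to show how groups emerge from unlabelled data:

```python
# A minimal k-means sketch (k=2): no labels are given, yet the algorithm
# splits the points into groups entirely on its own.

def kmeans(points, k=2, iters=10):
    centres = points[:k]  # naive seeding: first k points as initial centres
    clusters = []
    for _ in range(iters):
        # assignment step: each point joins its nearest centre
        clusters = [[] for _ in range(k)]
        for p in points:
            dists = [(p[0] - c[0])**2 + (p[1] - c[1])**2 for c in centres]
            clusters[dists.index(min(dists))].append(p)
        # update step: each centre moves to the mean of its cluster
        centres = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            for c in clusters if c
        ]
    return clusters

points = [(1, 1), (1.2, 0.8), (0.9, 1.1),    # one blob of points
          (8, 8), (8.3, 7.9), (7.8, 8.2)]    # another blob
groups = kmeans(points)
```

Nothing ever told the algorithm which blob a point 'should' belong to; the structure was discovered, not taught.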

While at first it may seem as though these clustering algorithms have no real-world implications, their reach into the real world is truly astonishing. These algorithms can scan through markets and identify segments within them. Government planners can feed in information about a population, and a clustering algorithm can pick out groups within the demographics (e.g. separating the wealthy from the poor).

A very famous problem built upon the foundations of unsupervised learning is the cocktail party problem. Imagine you're at a cocktail party. Hordes of people around. The air, polluted with noise flying in from all directions. Can we, through Machine Learning, filter out a single voice from the masses?

The answer is yes.

For simplicity's sake, let's assume this is a rather mundane cocktail party, with only two people. If we then place two microphones, each one closer to one of the two party 'guests', the setup could be modelled as so:

My rather beautiful Photoshop drawing to model the Cocktail problem with two people

As can be seen from the diagram above, there actually exists an algorithm that can take the muffled, fuzzy, overlaid input from the microphones and separate it such that we get one recording of just the top person's voice and one of just the bottom person's, which I find pretty mind-blowing.
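Here's a deliberately simplified sketch of the two-mic setup. The voice samples and mixing coefficients are invented numbers, and I cheat by inverting a known mixing matrix; the remarkable part of real algorithms (such as independent component analysis) is that they estimate this un-mixing *blind*, from the recordings alone:

```python
# Two hypothetical 'voices' as short sample sequences (made-up numbers)
voice_a = [0.0, 1.0, 0.0, -1.0]
voice_b = [1.0, 1.0, -1.0, 0.0]

# Each mic hears a different linear mix: mic1 sits nearer speaker A,
# mic2 nearer speaker B (coefficients are illustrative assumptions)
mic1 = [0.8 * a + 0.3 * b for a, b in zip(voice_a, voice_b)]
mic2 = [0.2 * a + 0.9 * b for a, b in zip(voice_a, voice_b)]

# Un-mix by inverting the 2x2 mixing matrix [[0.8, 0.3], [0.2, 0.9]]
det = 0.8 * 0.9 - 0.3 * 0.2
recovered_a = [( 0.9 * m1 - 0.3 * m2) / det for m1, m2 in zip(mic1, mic2)]
recovered_b = [(-0.2 * m1 + 0.8 * m2) / det for m1, m2 in zip(mic1, mic2)]
```

Because each mic hears a *different* mixture, enough information survives to pull the two voices back apart, which is exactly what the real separation algorithm exploits.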

So to conclude, we should now be familiar with the whole notion of supervised and unsupervised learning, the characteristics of each, how they may overlap, and what distinguishes them.

Thanks for reading!
