Can we do without labeled data? (Un)supervised ML

Lorenzo Melchior
dida Machine Learning
5 min read · Apr 14, 2020
[Image caption: Here labeling definitely makes sense.]

It seems to be a common misconception that machine learning is usually an unsupervised task: you have data (without pre-existing labels) on which you train, say, a neural network for tasks like classification or image segmentation. The truth is that most models in machine learning are supervised, that is, they rely on labeled training data.

But labeling often takes a lot of time and can be very tedious.

In this blog post I want to find out if I am able to perform the same classification task once with labels, once without.

For this task I will use the famous MNIST data set, which contains 60,000 training and 10,000 validation images of handwritten digits, all of them labeled. Every image consists of 28x28 greyscale pixels and contains only one digit, located in the center of the image. To make things easier, I use the CSV version of the data set.

Using unsupervised learning for classification

If you want to train a model to classify data, you usually use labels that indicate for each data sample which class it belongs to, so that the model can learn the typical patterns of each class. Now, if we want to train a classifier without having any labels, we must first find a way to group the training data points into these classes. For this I will use a clustering algorithm: it will group all the training data into ten clusters based on similarity. One would expect these clusters to correspond to the “real” classes, i.e. the digits 0–9.

I will use the k-means algorithm, which is very simple and intuitive. It compares the Euclidean distance between data points in the 784-dimensional pixel space spanned by the 28x28 images (this is just the generalization of how we measure distances in three-dimensional space) and groups them into ten clusters according to their locations. The intuition behind the algorithm is that if two data points are close to each other in terms of Euclidean distance, they probably belong to the same class.
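Written out for two images x and y, viewed as vectors of 784 pixel intensities, this distance is

$$d(x, y) = \sqrt{\sum_{i=1}^{784} (x_i - y_i)^2}.$$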

First we load some dependencies:
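A minimal sketch of the imports (the original snippet isn't preserved here), assuming pandas for the CSV files, scikit-learn for the models and matplotlib for the plots:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
```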

Then we define the train and validation sets:
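Assuming the usual layout of the CSV version of MNIST (the label in the first column, the 784 pixel values after it) and hypothetical file names:

```python
# Label in column 0, the 784 pixel values in columns 1..784.
train = pd.read_csv('mnist_train.csv')
val = pd.read_csv('mnist_test.csv')

X_train = train.iloc[:, 1:].to_numpy()  # 60,000 x 784
y_train = train.iloc[:, 0].to_numpy()   # labels (ignored while clustering)
X_val = val.iloc[:, 1:].to_numpy()      # 10,000 x 784
y_val = val.iloc[:, 0].to_numpy()
```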

Next we set up and train the clustering algorithm:
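A sketch using scikit-learn's KMeans; the fixed random_state is only there to make the clustering reproducible:

```python
# Group the 60,000 training images into ten clusters,
# ignoring the labels entirely.
kmeans = KMeans(n_clusters=10, random_state=0)
kmeans.fit(X_train)
```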

We have now found our ten clusters. Let’s compute the closest centroid for each image in the validation set:
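With scikit-learn this is a one-liner: predict returns, for each image, the index of the nearest cluster centroid.

```python
# Index of the nearest centroid for each validation image.
val_clusters = kmeans.predict(X_val)
```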

Finally, we can have a look at examples from all ten clusters: nine random samples for each cluster, with the computed cluster centroid to their right.
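One possible way to produce such an overview (the exact grid layout of the original figure is my assumption):

```python
# One row per cluster: nine random member images, plus the
# cluster centroid as an "average image" in the rightmost column.
rng = np.random.default_rng(0)
train_clusters = kmeans.labels_

fig, axes = plt.subplots(10, 10, figsize=(10, 10))
for cluster in range(10):
    members = np.flatnonzero(train_clusters == cluster)
    for col, idx in enumerate(rng.choice(members, size=9, replace=False)):
        axes[cluster, col].imshow(X_train[idx].reshape(28, 28), cmap='gray')
        axes[cluster, col].axis('off')
    axes[cluster, 9].imshow(kmeans.cluster_centers_[cluster].reshape(28, 28),
                            cmap='gray')
    axes[cluster, 9].axis('off')
plt.show()
```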

Inspecting the clusters, we see that they roughly correspond to the classes 6, 1, 8, 9, 3, 0, 1, 2, 9, 0. So, for example, instead of forming a cluster for the digit 5, the algorithm distinguishes between two different kinds of 0’s (thin round ones and thick crooked ones, it seems).

To analyse more rigorously which class fits which cluster best, I will simply try each of the 10! (= 3,628,800) possible permutations of class assignments. Note that I validate the performance with the validation set and its labels. If you really wanted to train completely without labels, you would have to find another way to determine which cluster belongs to which digit.
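Brute force over all 3,628,800 permutations is feasible if we precompute a 10x10 table counting how often each cluster coincides with each true digit; the accuracy of an assignment is then just a sum of ten table entries. A sketch (not necessarily how the original post did it):

```python
from itertools import permutations

# counts[c, d] = how often cluster c contains an image of digit d.
counts = np.zeros((10, 10), dtype=int)
for c, d in zip(val_clusters, y_val):
    counts[c, d] += 1

best_acc, best_perm = 0.0, None
for perm in permutations(range(10)):
    # perm[c] is the digit assigned to cluster c.
    acc = sum(counts[c, perm[c]] for c in range(10)) / len(y_val)
    if acc > best_acc:
        best_acc, best_perm = acc, perm
print(f'Best permutation: {best_perm} Accuracy: {best_acc:.4f}')
```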

We get the following best permutation:

Best permutation: (5, 1, 8, 4, 2, 3, 7, 6, 9, 0) Accuracy: 0.5173

Obviously a model that only gets every second guess correct is not very powerful. Of course there are ways to improve our results. I chose k-means because of its simplicity; more sophisticated clustering algorithms such as expectation maximization (e.g. Gaussian mixture models) or DBSCAN could lead to better results.

Now, let’s see whether we can achieve better results by using the labels. For the supervised learning approach I will use the k-nearest neighbours algorithm. It is quite similar to k-means, but instead of assigning a class via the closest centroid, it finds the k nearest labeled samples in the same 784-dimensional pixel space (again using Euclidean distance) and decides from their labels which class a given test sample belongs to. Since we now compare against a few very similar images rather than the average image of a whole cluster, the results should be much better.

Note: The parameter k determines how many nearest training samples are used to decide which class a test sample belongs to. I would recommend a small value like 1, 3 or 5.
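A sketch with scikit-learn's KNeighborsClassifier, using k=3 as one of the recommended values:

```python
from sklearn.neighbors import KNeighborsClassifier

# Fit merely stores the training data; the real work
# happens at prediction time.
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)
print(knn.score(X_val, y_val))  # accuracy on the validation set
```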

This outputs 0.9705.

With only 4 lines of code we were able to build and validate the model and reach an accuracy of 97.05%.

How much data do we actually need?

Since doing without labels didn’t really work out, let’s see if we can at least reduce the amount of data, so that we don’t need to spend so much time labeling.

For this I’m going to use the very powerful and easy-to-use XGBoost classifier. I will train it on different amounts of data to find out how the training set size affects performance.

At the same time I will make sure that the classifier gets enough time to learn as much as it can from the data in each run: I will let it train until it hasn’t improved for 100 consecutive rounds (early stopping).

If you like, have a look at the code, or skip it to check out the results.
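A sketch of such an experiment with XGBoost's scikit-learn wrapper. The sample-size grid is my choice, and passing early_stopping_rounds to the constructor assumes XGBoost >= 1.6 (older versions take it as a fit argument):

```python
from xgboost import XGBClassifier

# Roughly logarithmic grid of training set sizes (assumed, not from the post).
for n in [600, 1200, 3000, 6000, 12000, 30000, 60000]:
    model = XGBClassifier(n_estimators=10000, early_stopping_rounds=100)
    # Train on the first n samples; stop once the validation score
    # hasn't improved for 100 consecutive boosting rounds.
    model.fit(X_train[:n], y_train[:n],
              eval_set=[(X_val, y_val)], verbose=False)
    print(n, model.score(X_val, y_val))
```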

This is what we get (validation accuracy as a function of the number of training images):

As we can see, an accuracy above 80% can already be reached with less than 1% of the full 60,000 training images, and above 90% with just 10% of them. Still, the more images we use, the higher the obtained accuracy.

Conclusion

We showed that you can sometimes emulate supervised learning with unsupervised learning, relying only on unlabeled data. However, the results were much worse than with labeled data.

When it comes to the amount of training data, we have seen that you don’t need 60,000 images to classify handwritten digits. Above a certain sample size there are probably better things to do with your time than labeling, e.g. hyperparameter tuning. But having more data is always good, and the number of images you need depends heavily on the problem.

Originally published at https://dida.do on April 14, 2020.
