Python Machine Learning: Scikit-Learn Tutorial

Karlijn Willems
17 min read · Jan 5, 2017


Originally published at https://www.datacamp.com/community/tutorials/machine-learning-python

Machine learning studies the design of algorithms that can learn. The hope that this discipline brings with it is that including experience in its tasks will eventually improve the learning. The ultimate goal is for this improvement to happen in such a way that the learning itself becomes automatic, so that humans no longer need to interfere.

You’ll probably have heard already that machine learning has close ties to Knowledge Discovery, Data Mining, Artificial Intelligence (AI) and Statistics. Typical use cases of machine learning range from scientific knowledge discovery to more commercial ones: from the “Robot Scientist” to anti-spam filtering and recommender systems.

Or maybe, if you haven’t heard about this discipline, you’ll find it vaguely familiar as one of the 8 topics that you need to master if you want to excel in data science.

This scikit-learn tutorial will introduce you to the basics of Python machine learning: step by step, it will show you how to use Python and its libraries to explore your data with the help of matplotlib, how to work with the well-known algorithms KMeans and Support Vector Machines (SVM) to construct models, how to fit the data to these models, how to predict values and how to validate the models that you have built.

Note that the code chunks have been left out for convenience. If you want to follow and practice with code, go here.

If you’re more interested in an R tutorial, check out our Machine Learning with R for Beginners tutorial.

Loading Your Data

The first step to just about anything in data science is loading in your data. This is also the starting point of this tutorial.

If you’re new to this and you want to start working on problems of your own, finding data sets might prove to be a challenge. However, you can typically find good data sets at the UCI Machine Learning Repository or on the Kaggle website. Also, check out this KD Nuggets list with resources.

For now, you just load in the digits dataset that comes with the Python library scikit-learn. No need to go and look for datasets yourself.

Fun fact: did you know the name originates from the fact that this library is a scientific toolbox built around SciPy? By the way, there is more than just one scikit out there. This scikit contains modules specifically for machine learning and data mining, which explains the second component of the library name. :)

To load in the data, you import the module datasets from sklearn. Then, you can use the load_digits() method from datasets to load in the data.
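A minimal sketch of this step (the digits variable is the one used throughout the rest of the tutorial):

```python
# Import the `datasets` module from scikit-learn
from sklearn import datasets

# Load in the digits data
digits = datasets.load_digits()

# Print the whole Bunch object to get a first impression
print(digits)
```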

Note that the datasets module contains other methods to load and fetch popular reference datasets, and you can also count on this module in case you need artificial data generators. In addition, this data set is also available through the UCI Repository that was mentioned above: you can find the data here. You’ll load in this data with the help of the pandas library.
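If you would rather go the UCI route, a rough sketch with pandas could look like the following; the exact file URL (here, the optdigits training file) is an assumption that you should verify on the repository page:

```python
import pandas as pd

# URL of the optdigits training file on the UCI repository (assumed; please verify)
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/optdigits/optdigits.tra"

# The file has no header row; the last column holds the digit label
digits_df = pd.read_csv(url, header=None)

print(digits_df.head())
```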

When you first start working with a dataset, it’s always a good idea to go through the data description and see what you can already learn. With scikit-learn, you don’t immediately have this information readily available, but when you import data from another source, there’s usually a data description available, which will already give you enough information to gather some insights into your data.

However, these insights are often not deep enough for the analysis that you are going to perform. You really need a good working knowledge of the data set.

Performing an exploratory data analysis (EDA) on a data set like the one in this tutorial might seem difficult.

You should start with gathering the basic information: you already have knowledge of things such as the target values and the description of your data. You can access the digits data through the attribute data. Similarly, you can also access the target values or labels through the target attribute and the description through the DESCR attribute.

To see which keys you have available to already get to know your data, you can just run digits.keys().
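For example:

```python
# Get the keys of the `digits` Bunch
print(digits.keys())

# Isolate the data, the target values and the description
print(digits.data)
print(digits.target)
print(digits.DESCR)
```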

The next thing that you can (double)check is the type of your data.

If you used read_csv() to import the data, you would have a data frame that contains just the data. There wouldn’t be any description component, but you would be able to resort to, for example, head() or tail() to inspect your data. In these cases, it’s always wise to read up on the accompanying data description!

However, this tutorial assumes that you make use of the library’s data and the type of the digits variable is not that straightforward if you’re not familiar with the library. Look at the print out in the first code chunk. You’ll see that digits actually contains numpy arrays!
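A quick sketch of this (double)check:

```python
# Check the data type of `digits` and of the arrays it holds
print(type(digits))         # a scikit-learn Bunch object
print(type(digits.data))    # <class 'numpy.ndarray'>
print(type(digits.target))  # <class 'numpy.ndarray'>
```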

This is already quite some important information. But how do you access these arrays?

It’s very easy, actually: you use attributes to access the relevant arrays.

Remember that you have already seen which attributes are available when you printed digits.keys(). For instance, you have the data attribute to isolate the data, target to see the target values and the DESCR for the description, …

But what then?

The first thing that you should know of an array is its shape: that is, the number of dimensions and items that are contained within the array. The array’s shape is a tuple of integers that specify the sizes of each dimension.

Now let’s try to see what the shape is of the arrays that you have distinguished (the data, target and images arrays).

First use the data attribute to isolate the NumPy array from the digits data and then use the shape attribute to find out more. You can do the same for the target values and for the images attribute, which is basically the same data formatted as 8x8 images.
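A sketch of these shape inspections (np is just the usual NumPy alias):

```python
import numpy as np

# Inspect the shapes of the different arrays
print(digits.data.shape)              # (1797, 64): 1797 samples, 64 features
print(digits.target.shape)            # (1797,): one label per sample
print(len(np.unique(digits.target)))  # 10 unique target values: the digits 0 to 9
print(digits.images.shape)            # (1797, 8, 8): 1797 images of 8x8 pixels
```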

To recap: by inspecting digits.data, you see that there are 1797 samples and that there are 64 features. Because you have 1797 samples, you also have 1797 target values.

But those 1797 target values contain only 10 unique values, namely the numbers 0 to 9. In other words, all 1797 target values are made up of numbers that lie between 0 and 9. This means that the digits that your model will need to recognize are the numbers 0 to 9.

Lastly, you see that the images data contains three dimensions: there are 1797 instances that are 8 by 8 pixels big.

Then, you can take your exploration up a notch by visualizing the images that you’ll be working with. You can use one of Python’s data visualization libraries, such as matplotlib:
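A sketch of such a visualization, plotting the first 64 images in a grid with their target value in the corner:

```python
import matplotlib.pyplot as plt

# Figure with little whitespace between the subplots
fig = plt.figure(figsize=(6, 6))
fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)

# Plot the first 64 images of the digits data
for i in range(64):
    ax = fig.add_subplot(8, 8, i + 1, xticks=[], yticks=[])
    ax.imshow(digits.images[i], cmap=plt.cm.binary, interpolation='nearest')
    # Label the image with its target value in the lower-left corner
    ax.text(0, 7, str(digits.target[i]))

plt.show()
```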

On a simpler note, you can also visualize the images together with their target labels:
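For instance, by zipping the images and the target labels together and plotting a handful of them with their label as the title:

```python
import matplotlib.pyplot as plt

# Join the images and the target labels in a list of tuples
images_and_labels = list(zip(digits.images, digits.target))

# Plot the first four image/label pairs
for index, (image, label) in enumerate(images_and_labels[:4]):
    plt.subplot(1, 4, index + 1)
    plt.axis('off')
    plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
    plt.title('Training: ' + str(label))

plt.show()
```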

Now you have a very good idea of the data that you’ll be working with!

But is there no other way to visualize the data?

As the digits data set contains 64 features, this might prove to be a challenging task. You can imagine that it’s very hard to understand the structure and maintain an overview of the digits data. In such cases, it is said that you’re working with a high dimensional data set.

High dimensionality of data is a direct result of trying to describe the objects via a collection of features. Other examples of high dimensional data are financial data, climate data, neuroimaging data, …

But, as you might have gathered already, this is not always easy. In some cases, high dimensionality can be problematic, as your algorithms will need to take too many features into account. In such cases, you speak of the curse of dimensionality: having a lot of dimensions can also mean that your data points lie far away from virtually every other point, which makes the distances between the data points uninformative.

Don’t worry, though, because the curse of dimensionality is not simply a matter of counting the number of features. There are also cases in which the effective dimensionality is much smaller than the number of features, such as data sets in which some features are irrelevant.

In addition, you can also understand that data with only two or three dimensions is easier to grasp and can also be visualized easily.

That all explains why you’re going to visualize the data with the help of one of the Dimensionality Reduction techniques, namely Principal Component Analysis (PCA). The idea behind PCA is to find a linear combination of the original variables that contains most of the information. This new variable, or “principal component”, can replace the original variables.

In short, it’s a linear transformation method that yields the directions (principal components) that maximize the variance of the data. Remember that the variance indicates how far a set of data points lie apart. If you want to know more, go to this page.

You can easily apply PCA to your data with the help of scikit-learn.
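A sketch of this step; note that the original tutorial used RandomizedPCA(), which in recent scikit-learn versions has been folded into PCA() via the svd_solver='randomized' option (the variable names below are illustrative):

```python
from sklearn.decomposition import PCA

# Create a randomized PCA model that keeps only two components
# (older scikit-learn versions: RandomizedPCA(n_components=2))
randomized_pca = PCA(n_components=2, svd_solver='randomized')

# Fit the model to the digits data and reduce it to two dimensions
reduced_data_rpca = randomized_pca.fit_transform(digits.data)

print(reduced_data_rpca.shape)  # (1797, 2)
```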

Tip: you have used the RandomizedPCA() here because it performs better when there’s a high number of dimensions. Try replacing the randomized PCA model or estimator object with a regular PCA model and see what the difference is.

Note how you explicitly tell the model to only keep two components. This is to make sure that you have two-dimensional data to plot. Also, note that you don’t pass the target class with the labels to the PCA transformation because you want to investigate if the PCA reveals the distribution of the different labels and if you can clearly separate the instances from each other.

You can now build a scatterplot to visualize the data:
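A sketch of such a scatterplot, coloring the points by their known digit label (the color list is just one possible choice):

```python
import matplotlib.pyplot as plt

colors = ['black', 'blue', 'purple', 'yellow', 'white', 'red',
          'lime', 'cyan', 'orange', 'gray']

# Plot the two principal components, one digit class at a time
for i in range(len(colors)):
    x = reduced_data_rpca[:, 0][digits.target == i]
    y = reduced_data_rpca[:, 1][digits.target == i]
    plt.scatter(x, y, c=colors[i])

plt.legend(digits.target_names, bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.xlabel('First Principal Component')
plt.ylabel('Second Principal Component')
plt.title('PCA Scatter Plot')
plt.show()
```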

Again you use matplotlib to visualize the data. It’s good for a quick visualization of what you’re working with, but you might have to consider something a little bit more fancy if you’re working on making this part of your data science portfolio.

Also note that the last call to show the plot (plt.show()) is not necessary if you’re working in Jupyter Notebook, as you’ll want to put the images inline. When in doubt, you can always check out our Definitive Guide to Jupyter Notebook.

Where To Go Now?

Now that you have even more information about your data and you have a visualization ready, it does seem a bit like the data points sort of group together, but you also see there is quite some overlap.

This might be interesting to investigate further.

Do you think that, in a case where you know that there are 10 possible digit labels to assign to the data points but you have no access to those labels, the observations would group or “cluster” together by some criterion in such a way that you could infer the labels?

Now this is a research question!

In general, when you have acquired a good understanding of your data, you have to decide on the use cases that would be relevant to your data set. In other words, you think about what your data set might teach you or what you think you can learn from your data.

From there on, you can think about what kind of algorithms you would be able to apply to your data set in order to get the results that you think you can obtain.

Tip: the more familiar you are with your data, the easier it will be to assess the use cases for your specific data set. The same also holds for finding the appropriate machine learning algorithm.

However, when you’re first getting started with scikit-learn, you’ll see that the number of algorithms that the library contains is pretty vast and that you might still want additional help when you’re doing the assessment for your data set. That’s why this scikit-learn machine learning map will come in handy.

Note that this map does require you to have some knowledge about the algorithms that are included in the scikit-learn library. This, by the way, also holds some truth for taking this next step in your project: if you have no idea what is possible, it will be very hard to decide on what your use case will be for the data.

As your use case was one for clustering, you can follow the path on the map towards “KMeans”. You’ll see that the use case you have just thought about requires you to have more than 50 samples (“check!”), to not have labeled data (“check!”), to know the number of categories that you want to predict (“check!”) and to have fewer than 10K samples (“check!”).

But what exactly is the K-Means algorithm?

It is one of the simplest and most widely used unsupervised learning algorithms for solving clustering problems. The procedure follows a simple and easy way to classify a given data set through a certain number of clusters that you fix before you run the algorithm. This number of clusters is called k.

Then, the k-means algorithm will find the nearest cluster center for each data point and assign that data point to the closest cluster.

Once all data points have been assigned to clusters, the cluster centers will be recomputed. In other words, new cluster centers will emerge from the average of the values of the cluster data points. This process is repeated until most data points stick to the same cluster. The cluster membership should stabilize.

You can already see that, because of the way the k-means algorithm works, the initial set of cluster centers that you give it can have a big effect on the clusters that are eventually found. You can, of course, deal with this effect, as you will see further on.

However, before you can go into making a model for your data, you should definitely take a look into preparing your data for this purpose.

As you have read in the previous section, before modeling your data, you’ll do well by preparing it first. This preparation step is called “preprocessing”.

The first thing that you’re going to do is preprocess the data. You can standardize the digits data by, for example, making use of the scale() method. By scaling the data, you shift the distribution of each attribute to have a mean of zero and a standard deviation of one (unit variance).
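A minimal sketch, storing the standardized data in a new variable data (a name used in the splits below):

```python
from sklearn.preprocessing import scale

# Standardize the digits data: zero mean and unit variance per feature
data = scale(digits.data)
```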

In order to assess your model’s performance later, you will also need to divide the data set into two parts: a training set and a test set. The first is used to train the system, while the second is used to evaluate the learned or trained system.

In practice, the division of your data set into a test and a training set is disjoint: the most common splitting choice is to take 2/3 of your original data set as the training set, while the 1/3 that remains will compose the test set.

You will do something similar here, although with a slightly different ratio: in the arguments of the train_test_split() method, you see that the test_size is set to 0.25, so that 75% of the data goes to the training set and the remaining 25% to the test set.

You’ll also note that the argument random_state has the value 42 assigned to it. With this argument, you can guarantee that your split will always be the same. That is particularly handy if you want reproducible results.

After you have split up your data set into train and test sets, you can quickly inspect the numbers before you go and model the data:
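A sketch of the split and of the quick inspection that follows it; the images array is split along with the data so that you can visualize predictions later on (the variable names are illustrative):

```python
from sklearn.model_selection import train_test_split

# Split the standardized data, the labels and the images into a 75/25 split
X_train, X_test, y_train, y_test, images_train, images_test = train_test_split(
    data, digits.target, digits.images, test_size=0.25, random_state=42)

# Quickly inspect the numbers
print(X_train.shape)  # (1347, 64)
print(y_train.shape)  # (1347,)
print(X_test.shape)   # (450, 64)
print(y_test.shape)   # (450,)
```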

You’ll see that the training set X_train now contains 1347 samples, which is 75% of the samples that the original data set contained, and still 64 features, which hasn’t changed. The y_train set contains the corresponding 1347 labels. This means that the test sets X_test and y_test contain the remaining 450 samples.

After all these preparation steps, you have made sure that all your known (training) data is stored. No actual model or learning was performed up until this moment.

Now, it’s finally time to find those clusters of your training set. Use KMeans() from the cluster module to set up your model. You’ll see that there are three arguments that are passed to this method: init, n_clusters and the random_state.

You might still remember this last argument from before when you split the data into training and test sets. This argument basically guaranteed that you got reproducible results.

The init argument indicates the method for initialization and, even though it defaults to ‘k-means++’, you see it explicitly coming back in the code. That means that you can leave it out if you want. Try it and see whether the results change!

Next, you also see that the n_clusters argument is set to 10. This number not only indicates the number of clusters or groups you want your data to form, but also the number of centroids to generate. Remember that a cluster centroid is the middle of a cluster.

Do you also still remember how the previous section described this as one of the possible disadvantages of the K-Means algorithm?

That is, that the initial set of cluster centers that you give it can have a big effect on the clusters that are eventually found?

Usually, you try to deal with this effect by trying several initial sets in multiple runs and by selecting the set of clusters with the minimum sum of the squared errors (SSE). In other words, you want to minimize the distance of each point in the cluster to the mean or centroid of that cluster.

By adding the n_init argument to KMeans(), you can determine how many different centroid configurations the algorithm will try.

Note again that you don’t want to insert the test labels when you fit the model to your data: these will be used to see if your model is good at predicting the actual classes of your instances!
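Putting the previous paragraphs together, a sketch of the model setup and fit (clf is an illustrative variable name):

```python
from sklearn import cluster

# Set up the KMeans model: k-means++ initialization, 10 clusters,
# and a fixed random_state for reproducible results
clf = cluster.KMeans(init='k-means++', n_clusters=10, random_state=42)

# Fit the model to the training data only; no labels are passed in
clf.fit(X_train)
```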

You can also visualize the images that make up the cluster centers:
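A sketch of what that could look like, reshaping each 64-dimensional cluster center back into an 8x8 image:

```python
import matplotlib.pyplot as plt

# Figure with a subplot for each of the 10 cluster centers
fig = plt.figure(figsize=(8, 3))
fig.suptitle('Cluster Center Images', fontsize=14, fontweight='bold')

for i in range(10):
    ax = fig.add_subplot(2, 5, i + 1)
    ax.imshow(clf.cluster_centers_[i].reshape((8, 8)), cmap=plt.cm.binary)
    plt.axis('off')

plt.show()
```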

If you want to see another example that visualizes the data clusters and their centers, go here.

The next step is to predict the labels of the test set. You predict the values for the test set, which contains 450 samples, and store the result in y_pred. You also print out the first 100 instances of y_pred and y_test and you immediately see some results. In addition, you can study the shape of the cluster centers: you immediately see that there are 10 clusters, each with 64 features.
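A sketch of this prediction step:

```python
# Predict the cluster labels for the test set
y_pred = clf.predict(X_test)

# Compare the first 100 predicted labels with the first 100 true labels
print(y_pred[:100])
print(y_test[:100])

# Study the shape of the cluster centers: 10 clusters, 64 features each
print(clf.cluster_centers_.shape)  # (10, 64)
```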

But this doesn’t tell you much, because you set the number of clusters to 10 and you already knew that there were 64 features.

Maybe a visualization would be more helpful:
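One option, sketched below, is to reduce the training data to two dimensions (here with Isomap, although any dimensionality reduction method would do) and to plot the predicted clusters next to the actual labels:

```python
from sklearn.manifold import Isomap
import matplotlib.pyplot as plt

# Reduce the training data to two dimensions
X_iso = Isomap(n_neighbors=10).fit_transform(X_train)

# Compute the cluster labels for the training data
clusters = clf.fit_predict(X_train)

# Plot the predicted clusters next to the actual labels
fig, ax = plt.subplots(1, 2, figsize=(8, 4))
fig.suptitle('Predicted Versus Training Labels', fontsize=14, fontweight='bold')
ax[0].scatter(X_iso[:, 0], X_iso[:, 1], c=clusters)
ax[0].set_title('Predicted Training Labels')
ax[1].scatter(X_iso[:, 0], X_iso[:, 1], c=y_train)
ax[1].set_title('Actual Training Labels')
plt.show()
```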

Tip: run the code from above again, but use the PCA reduction method:
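That swap could look roughly like this:

```python
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

# Reduce the training data to two dimensions with PCA instead of Isomap
X_pca = PCA(n_components=2).fit_transform(X_train)

fig, ax = plt.subplots(1, 2, figsize=(8, 4))
fig.suptitle('Predicted Versus Training Labels (PCA)', fontsize=14, fontweight='bold')
ax[0].scatter(X_pca[:, 0], X_pca[:, 1], c=clusters)
ax[0].set_title('Predicted Training Labels')
ax[1].scatter(X_pca[:, 0], X_pca[:, 1], c=y_train)
ax[1].set_title('Actual Training Labels')
plt.show()
```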

At first sight, the visualization doesn’t seem to indicate that the model works well.

This needs some further investigation.

And this need for further investigation brings you to the next essential step, which is the evaluation of your model’s performance. In other words, you want to analyze the degree of correctness of the model’s predictions.

You should look at the confusion matrix. Then, you should try to figure out something more about the quality of the clusters by applying different cluster quality metrics. That way, you can judge the goodness of fit of the cluster labels to the correct labels.

There are quite a few metrics to consider:

  • The homogeneity score
  • The completeness score
  • The V-measure score
  • The adjusted Rand score
  • The Adjusted Mutual Info (AMI) score
  • The silhouette score
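A sketch that computes the confusion matrix and each of these scores for the predicted clusters (y_pred) against the true test labels (y_test):

```python
from sklearn import metrics

# Confusion matrix of the true test labels versus the predicted clusters
print(metrics.confusion_matrix(y_test, y_pred))

# Cluster quality metrics, comparing the predicted clusters with the true labels
print(metrics.homogeneity_score(y_test, y_pred))
print(metrics.completeness_score(y_test, y_pred))
print(metrics.v_measure_score(y_test, y_pred))
print(metrics.adjusted_rand_score(y_test, y_pred))
print(metrics.adjusted_mutual_info_score(y_test, y_pred))

# The silhouette score is computed on the data itself, not on the true labels
print(metrics.silhouette_score(X_test, y_pred, metric='euclidean'))
```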

But these scores aren’t fantastic either.

Clearly, you should consider another estimator to predict the labels for the digits data.

When you recap all of the information that you gathered from the data exploration, you see that you could build a model to predict which group a digit belongs to without knowing its label. And indeed, you just used the training data and not the target values to build your KMeans model.

Let’s now start from the case where you use both the digits training data and the corresponding target values to build your model.

If you follow the algorithm map, you’ll see that the first model that you meet is the linear SVC. Let’s apply this to our data.

You see here that you make use of X_train and y_train to fit the data to the SVC model. This is clearly different from clustering. Note also that in this example, you set the value of gamma manually. It is possible to automatically find good values for the parameters by using tools such as grid search and cross validation.
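A sketch of this step; the parameter values (gamma=0.001, C=100) are illustrative rather than tuned, and svc_model is just an illustrative name:

```python
from sklearn import svm

# Create the SVC model with a linear kernel; gamma and C are set manually here
svc_model = svm.SVC(gamma=0.001, C=100., kernel='linear')

# Fit the model to the training data and the corresponding labels
svc_model.fit(X_train, y_train)
```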

Even though this is not the focus of this tutorial, you will see how you could have gone about this if you had made use of grid search to adjust your parameters.

For a walkthrough on how you should apply grid search, I refer you to the original tutorial.
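Still, to give you a rough idea, a grid search over C and gamma could be sketched as follows (the parameter grid and the split are purely illustrative):

```python
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn import svm

# Split the digits data into a training and a test set for the search
X_tr, X_te, y_tr, y_te = train_test_split(
    digits.data, digits.target, test_size=0.5, random_state=0)

# Candidate parameter values to try
parameter_candidates = [
    {'C': [1, 10, 100, 1000], 'kernel': ['linear']},
    {'C': [1, 10, 100, 1000], 'gamma': [0.001, 0.0001], 'kernel': ['rbf']},
]

# Search the grid with cross-validation and fit it on the training data
search = GridSearchCV(estimator=svm.SVC(), param_grid=parameter_candidates, n_jobs=-1)
search.fit(X_tr, y_tr)

# Inspect the best score and the corresponding parameter values
print('Best score:', search.best_score_)
print('Best parameters:', search.best_params_)
```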

You see that the SVM classifier has a kernel argument that specifies the kernel type that you’re going to use in the algorithm. By default, this is rbf. In other cases, you can specify others, such as linear, poly, …

But what is a kernel exactly?

A kernel is a similarity function, which is used to compute the similarity between the training data points. When you provide a kernel to an algorithm, together with the training data and the labels, you will get a classifier, as is the case here. You will have trained a model that assigns new unseen objects to a particular category. For the SVM, you will typically try to linearly divide your data points.

You can now visualize the images and their predicted labels. This plot is very similar to the plot that you made when you were exploring the data:
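A sketch of that plot, using the images_test array that was split off earlier together with the predicted labels:

```python
import matplotlib.pyplot as plt

# Predict the labels for the test set with the trained SVC model
predicted = svc_model.predict(X_test)

# Zip the test images together with the predicted labels
images_and_predictions = list(zip(images_test, predicted))

# Plot the first four images with their predicted label as the title
for index, (image, prediction) in enumerate(images_and_predictions[:4]):
    plt.subplot(1, 4, index + 1)
    plt.axis('off')
    plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
    plt.title('Predicted: ' + str(prediction))

plt.show()
```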

But now the biggest question: how does this model perform?
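One way to find out is to print a classification report and a confusion matrix for the test set predictions, for example along these lines:

```python
from sklearn import metrics

# Predict the test labels with the trained SVC model
predicted = svc_model.predict(X_test)

# Precision, recall and f1-score per digit
print(metrics.classification_report(y_test, predicted))

# Confusion matrix of the true versus the predicted test labels
print(metrics.confusion_matrix(y_test, predicted))

# Overall accuracy on the test set
print(metrics.accuracy_score(y_test, predicted))
```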

You clearly see that this model performs a whole lot better than the clustering model that you used earlier.

You can also see it when you visualize the predicted and the actual labels:
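A sketch of that comparison, again projecting the test data to two dimensions (Isomap is used here, but that choice is illustrative):

```python
from sklearn.manifold import Isomap
import matplotlib.pyplot as plt

# Reduce the test data to two dimensions
X_iso_test = Isomap(n_neighbors=10).fit_transform(X_test)

# Predicted labels for the test set
predicted = svc_model.predict(X_test)

# Plot the predicted labels next to the actual test labels
fig, ax = plt.subplots(1, 2, figsize=(8, 4))
fig.suptitle('Predicted Versus Actual Labels', fontsize=14, fontweight='bold')
ax[0].scatter(X_iso_test[:, 0], X_iso_test[:, 1], c=predicted)
ax[0].set_title('Predicted Labels')
ax[1].scatter(X_iso_test[:, 0], X_iso_test[:, 1], c=y_test)
ax[1].set_title('Actual Labels')
plt.show()
```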

You’ll see that this visualization confirms your classification report, which is very good news. :)

What’s Next In Your Data Science Journey?

Congratulations, you have reached the end of this scikit-learn tutorial, which was meant to introduce you to Python machine learning! Now it’s your turn.

Start your own digit recognition project with different data. One dataset that you can already use is the MNIST data, which you can download here.

The steps that you will need to take are very similar to the ones that you have gone through with this tutorial, but if you still feel that you can use some help, you should check out this page, which works with the MNIST data and applies the KMeans algorithm.

Working with the digits dataset was the first step in classifying characters with scikit-learn. If you’re done with this, you might consider trying out an even more challenging problem, namely, classifying alphanumeric characters in natural images.

A well-known dataset that you can use for this problem is the Chars74K dataset, which contains more than 74,000 images of digits from 0 to 9 and both the lowercase and uppercase letters of the English alphabet. You can download the dataset here.

Whether you’re going to start with the projects that have been mentioned above or not, this is definitely not the end of your data science journey with Python. If you choose not to widen your view just yet, consider deepening your data visualization and data manipulation knowledge: don’t miss out on DataCamp’s Interactive Data Visualization with Bokeh course to make sure you can impress your peers with a stunning data science portfolio, or DataCamp’s pandas Foundation course to learn more about working with data frames in Python.

Originally published at www.datacamp.com.
