Kernel SVM: Machine Learning

TC. Lin
3 min read · Jan 22, 2023

This article continues from the previous: Support Vector Machine.

Welcome back! Remember the articles on the topic of SVM? If not, be sure to check them out:

  1. Support Vector Regression
  2. Support Vector Machine

As we discussed, SVM uses the idea of support vectors to produce decision boundaries that classify data into groups.
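To recap the basic case, here is a minimal sketch of a linear SVM with scikit-learn, using a small hypothetical toy dataset of two well-separated clusters. The fitted model exposes the support vectors that define the boundary:

```python
import numpy as np
from sklearn.svm import SVC

# Two linearly separable clusters (toy data)
X = np.array([[1, 1], [2, 1], [1, 2], [5, 5], [6, 5], [5, 6]])
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel='linear')
clf.fit(X, y)

# The support vectors are the points closest to the boundary
print(clf.support_vectors_)
print(clf.predict([[1.5, 1.5], [5.5, 5.5]]))
```

Only the support vectors matter for the final boundary; the other points could move around (without crossing it) and the model would not change.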

Image from: https://www.theclickreader.com/support-vector-regression/

However, have you ever wondered what happens if the spread of data on the scatter plot is not so obvious, the so-called 'not linearly separable' case? How do we go about dealing with it?

Image from TechTalks

For graph B, SVM figures out the 'separation boundary' within the data by using a 'kernel'.

Imagine graph B as the top view of 3D data. The function that lifts the data into this higher dimension, letting us circle and distinguish the different groups, is built from a 'kernel'.

Speaking of '3D', imagine we find a way to lift our data up like the following:

To do so, we apply a function known as the Gaussian RBF kernel:

K(x, l) = exp(−‖x − l‖² / (2σ²))

Hey! Don't be intimidated! Just keep in mind that this formula is applied to each data point x, with l as a landmark point and σ controlling the width of the 'mountain'. Applying it gives the data the 3D shape shown above.
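To make the 'lifting' concrete, here is a small sketch of the Gaussian RBF formula applied by hand (the landmark position and σ are illustrative choices, not from the article). A point near the landmark gets a value close to 1 (the top of the 'mountain'), while a far-away point gets a value close to 0 (flat ground):

```python
import numpy as np

def rbf(x, landmark, sigma=1.0):
    # Gaussian RBF: exp(-||x - l||^2 / (2 * sigma^2))
    return np.exp(-np.sum((x - landmark) ** 2) / (2 * sigma ** 2))

landmark = np.array([0.0, 0.0])  # centre of the inner group of points
near = np.array([0.1, 0.1])      # point close to the landmark
far = np.array([3.0, 3.0])       # point far from the landmark

print(rbf(near, landmark))  # close to 1: lifted to the top of the 'mountain'
print(rbf(far, landmark))   # close to 0: stays on the flat ground
```

This third value is exactly the extra dimension in the 3D picture, which is what lets a flat circle drawn around the mountain separate the two groups.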

After doing so, we can then draw a circle around ‘the mountain’ like so:

We then push the mountain back down flat and end up with:

Image from TechTalks

Easy enough? That is pretty much about it!

Of course, if you are interested in how everything works under the hood, feel free to search it up.

A takeaway message is that SVM makes use of this 'kernel' idea to classify data into different classes. Keep in mind that there are many different types of kernels, but the goal is the same: to find a way to classify the data. One of the most common kernels is the one we have talked about: the Gaussian RBF kernel.

If you are interested:

Image from: https://data-flair.training/

With this power, we are able to draw not just a 'straight line' across our data points, but something more like a 'curvy line or circle', thus making our predictions more accurate.

from sklearn.svm import SVC

estimator = SVC(kernel='rbf')  # the 'kernel' argument is where we specify the kernel that we want to use

That is it on SVM! Well, the maths behind it might not be as simple to understand, but knowing how it works gives us the power to more easily tackle the complex parts, if you are interested!


> Continue reading: Naive Bayes

