Understanding Support Vector Machines

Knoldus Inc.
Knoldus - Technical Insights
Aug 19, 2016

[Contributed by Raghu from Knoldus, Canada]

One of the most important and popular classification techniques among Machine Learning algorithms is the Support Vector Machine (SVM), also called a large margin classifier. An SVM produces a hyperplane that separates the training samples into two distinct classes, and it does so with the maximum separation possible; hence the name large margin classifier. A 2-dimensional depiction of this is shown in the picture below. This is the case of a linear SVM, where the decision boundary that separates the classes is linear.

[Figure: a linear SVM's separating hyperplane in 2 dimensions, with maximum margin between the two classes]
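For concreteness, here is a minimal sketch of a linear SVM in scikit-learn (the library we use later in this article); the four training points are illustrative, not taken from the figure. The margin width can be recovered from the learned weight vector w as 2/||w||:

[code language="python"]
from sklearn import svm
import numpy as np

# Two toy classes in 2 dimensions (illustrative data)
X = [[0, 0], [0, 1], [2, 2], [2, 3]]
y = [0, 0, 1, 1]

# A linear SVM: the decision boundary is a straight line
clf = svm.SVC(kernel='linear', C=1.0)
clf.fit(X, y)

w = clf.coef_[0]                  # normal vector of the separating hyperplane
margin = 2.0 / np.linalg.norm(w)  # total width of the large margin
print(clf.support_vectors_)       # the samples that define the margin
print(margin)
[/code]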

Support Vector Machines also support classification where the decision boundary is non-linear. In this case, the SVM uses a kernel. The most popular kernel for non-linear decision problems is the Radial Basis Function kernel (RBF kernel for short), also called a Gaussian kernel. The image below depicts the working of an SVM with a Gaussian kernel, which classifies using a non-linear decision boundary.

[Figure: an SVM with a Gaussian (RBF) kernel producing a non-linear decision boundary]
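To make the kernel concrete: the Gaussian (RBF) kernel scores the similarity of two points as K(x, x') = exp(-gamma * ||x - x'||^2). A minimal sketch, assuming NumPy and scikit-learn are available, computes this by hand and checks it against scikit-learn's rbf_kernel helper:

[code language="python"]
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

x1 = np.array([[0.0, 0.0]])
x2 = np.array([[1.0, 1.0]])
gamma = 0.5  # illustrative value; SVC picks one itself when gamma='auto'

# Gaussian kernel by hand: exp(-gamma * squared Euclidean distance)
by_hand = np.exp(-gamma * np.sum((x1 - x2) ** 2))

# The same value from scikit-learn
by_library = rbf_kernel(x1, x2, gamma=gamma)

print(by_hand)           # 0.36787...
print(by_library[0][0])  # matches
[/code]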

One of the easiest ways to build an SVM is to use one of the SVM implementations available in many of the popular ML libraries for various languages. LIBSVM, scikit-learn, and Spark ML are all examples of SVM implementations that are ready to use. In this article, we will demonstrate a simple way to build an SVM, train it, and then use it, with scikit-learn in Python.

The following listing shows a Python session:

[code language="python"]

Python 2.7.11 |Anaconda 4.0.0 (64-bit)| (default, Dec 6 2015, 18:08:32)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> from sklearn import svm
>>> theSVM = svm.SVC()
>>> X = [[0,0], [1,1]]
>>> y = [0,1]
>>> theSVM.fit(X,y)
SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
  decision_function_shape=None, degree=3, gamma='auto', kernel='rbf',
  max_iter=-1, probability=False, random_state=None, shrinking=True,
  tol=0.001, verbose=False)
>>> theSVM.predict([[.3,.3]])
array([0])
>>> theSVM.predict([[.6,.6]])
array([1])
>>>

[/code]

In the above Python session, we created a classifier that uses an SVM. As can be seen from the output below, the kind of kernel used is RBF. The RBF kernel takes gamma as a parameter; in this case, gamma is set automatically. We can also specify the value of C, another hyperparameter, which by default is set to 1.0.

SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
  decision_function_shape=None, degree=3, gamma='auto', kernel='rbf',
  max_iter=-1, probability=False, random_state=None, shrinking=True,
  tol=0.001, verbose=False)
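Both hyperparameters can be set explicitly when constructing the classifier. A small sketch (the values below are illustrative, not tuned for any particular dataset):

[code language="python"]
from sklearn import svm

# Larger C penalizes margin violations more strongly; gamma controls
# how quickly the RBF kernel's influence falls off with distance
theSVM = svm.SVC(C=10.0, kernel='rbf', gamma=0.5)
theSVM.fit([[0, 0], [1, 1]], [0, 1])
print(theSVM.predict([[.3, .3]]))  # array([0])
[/code]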

Our inputs have 2 dimensions. In this case, we have 2 training examples, [0,0] and [1,1], and the values of y for these inputs are 0 and 1 respectively. The SVM comes up with a decision boundary that places [0,0] and [1,1] on opposite sides. We can now use this SVM by giving it an X, and the SVM classifies it and prints out the output: it classifies [.3,.3] as 0 and [.6,.6] as 1.
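To see where that boundary lies, we can ask the classifier for the signed score of a point relative to the decision boundary via decision_function: negative values fall on the class-0 side, positive values on the class-1 side. A small sketch repeating the session above:

[code language="python"]
from sklearn import svm

theSVM = svm.SVC()
theSVM.fit([[0, 0], [1, 1]], [0, 1])

# Signed score relative to the decision boundary:
# negative -> class-0 side, positive -> class-1 side
print(theSVM.decision_function([[.3, .3]]))  # negative
print(theSVM.decision_function([[.6, .6]]))  # positive
print(theSVM.decision_function([[.5, .5]]))  # near zero: close to the boundary
[/code]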

Enjoy!
