Statistical Learning in Julia. Part 3 — Support Vector Machine
Chapters
- Part 1: Statistical Learning in Julia. Part 1 — Linear Regression
- Part 2: Statistical Learning in Julia. Part 2 — K-Nearest Neighbors
Here we are with part 3! Take a seat, open your IDE, gather your cognitive resources, and code along. Today we are dealing again with a classification problem, this time using a powerful algorithm: the Support Vector Machine. This algorithm also gives us the opportunity to introduce the kernel trick.
Introduction
To be precise, there are three different classifiers that people refer to as the Support Vector Machine (SVM), each built on top of the previous one: the Maximal Margin Classifier, the Support Vector Classifier and, finally, the Support Vector Machine itself.
The Maximal Margin Classifier
This algorithm tries to find the best hyperplane that separates our classes.
We can define a hyperplane in a p-dimensional space as a "flat affine subspace of dimension p-1" [1]. For instance, in a two-dimensional space, a hyperplane is defined by the equation

$$\beta_0 + \beta_1 X_1 + \beta_2 X_2 = 0$$
As you can see, the hyperplane in a two-dimensional space is just a straight line. We can intuitively understand that…
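To make this concrete, here is a minimal Julia sketch (the coefficients are made up for illustration) that evaluates the hyperplane equation above and classifies a point by the sign of the result, which is exactly the decision rule these classifiers build on:

```julia
# Made-up coefficients of a hyperplane in 2D: β₀ + β₁x₁ + β₂x₂ = 0
β0, β1, β2 = 1.0, 2.0, -3.0

# f(x) > 0 on one side of the hyperplane, f(x) < 0 on the other,
# and f(x) == 0 exactly on the hyperplane (a straight line in 2D).
f(x1, x2) = β0 + β1 * x1 + β2 * x2

# Classify a point according to which side of the hyperplane it lies on.
classify(x1, x2) = f(x1, x2) > 0 ? :positive : :negative

println(classify(2.0, 1.0))  # f = 1 + 4 - 3 =  2 > 0 → :positive
println(classify(0.0, 1.0))  # f = 1 + 0 - 3 = -2 < 0 → :negative
```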
