Support Vector Machine — The Math

Priyansh Soni
2 min readFeb 28, 2022


This post is a companion to the article below:

Here, we discuss the math behind the master-algo SVM.
Let’s bungee!

We were discussing the loss function of SVM, and for that we need the math that runs under-the-hood for the model. I’ll start from the very beginning.

Disclaimer: Please refer to the article above before going through this one. Don’t complain later if it goes over your head.

OUTLINE —

  • Equation of Hyperplanes
  • Distance between the Support Vectors
  • Loss function of SVM
  • Regularisation and Hinge Loss
  • Cost function for SVM
  • Hard Margin Cost function
  • Soft Margin Cost function

1. Equation of Hyperplanes

The hyperplanes in SVM look like this:

[Figure: the SVM margin hyperplanes — just a quick look]

There is a positive one and a negative one. Our aim is to maximise the margin between them. For this we need some quantities we can tweak (maximise/minimise) so that the margin grows.
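In the conventional notation (an assumption here, since the figure above isn't reproduced), the decision boundary sits midway between the two hyperplanes, and all three are defined by the same weight vector w and bias b:

```latex
\begin{aligned}
w^{\top}x + b &= 0  &&\text{(decision boundary)} \\
w^{\top}x + b &= +1 &&\text{(positive hyperplane)} \\
w^{\top}x + b &= -1 &&\text{(negative hyperplane)}
\end{aligned}
```

The distance between the two margin hyperplanes comes out to 2/‖w‖, which is why maximising the margin amounts to minimising ‖w‖.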

So let’s assume that the blue support vector closest to the positive hyperplane is called A. Now, the minimum distance between an observation A and the maximum margin hyperplane can be measured along a line that is orthogonal to the plane and goes through A. We call this orthogonal line w.
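The orthogonal distance described above can be sketched numerically. The values of w, b, and the point A below are made up for illustration; the formula |w·A + b| / ‖w‖ is the standard point-to-hyperplane distance:

```python
import numpy as np

# Hypothetical weight vector w and bias b defining the hyperplane w.x + b = 0
w = np.array([3.0, 4.0])   # ||w|| = 5
b = -2.0

# Hypothetical support vector A closest to the positive hyperplane
A = np.array([2.0, 1.0])

# Value of the hyperplane function at A: 3*2 + 4*1 - 2 = 8
f_A = np.dot(w, A) + b

# Perpendicular distance from A to the plane, measured along the
# direction of w (the orthogonal line the text calls w)
distance = abs(f_A) / np.linalg.norm(w)
print(distance)  # 1.6
```

Scaling w and b by any positive constant leaves this distance unchanged, which is why SVM fixes the scale by pinning the support vectors to w·x + b = ±1.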

