# Support Vector Machine (SVM) in Machine Learning

# What is Support Vector Machine?

“**Support Vector Machine**” (SVM) is a supervised **machine learning algorithm** that can be used for both classification and regression problems, though it is mostly used for classification. In the SVM algorithm, we plot each data item as a point in n-dimensional space (where n is the number of features), with the value of each feature being the value of a particular coordinate. Then we perform classification by finding the **hyperplane** that best separates the two classes.

**Support Vectors:**

Support vectors are the data points closest to the hyperplane. These points define the separating hyperplane by determining the **margin**, and they are the most relevant to the construction of the classifier.

**Hyperplane:**

A **hyperplane** is a decision surface that separates a set of objects having different class memberships.

**Margin:**

The **margin** is the gap between the hyperplane and the closest points of each class, measured as the perpendicular distance from the hyperplane to the support vectors (the closest points). A larger margin between the classes is considered a good margin; a smaller one is a bad margin.
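In symbols (a standard formulation, not spelled out in the post): if the separating hyperplane is written as w·x + b = 0 and the data are scaled so that the support vectors satisfy w·x + b = ±1, the margin works out to 2/‖w‖, which is why maximizing the margin amounts to minimizing ‖w‖.

```latex
% Separating hyperplane
w \cdot x + b = 0
% Support vectors lie on the two parallel planes
w \cdot x + b = \pm 1
% Perpendicular distance between those planes, i.e. the margin
\text{margin} = \frac{2}{\lVert w \rVert}
```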

# How does SVM work?

The main objective is to **segregate** the given dataset in the best possible way. The distance between the hyperplane and the nearest points of each class is known as the margin. The goal is to select the hyperplane with the maximum possible margin to the support vectors in the given dataset. SVM searches for this **maximum marginal hyperplane** in the following steps:

- Generate hyperplanes that segregate the classes in the best possible way. The **left-hand side figure** shows **three hyperplanes: black, blue and orange**. Here, the blue and orange hyperplanes have a higher classification error, while the black one separates the two classes correctly.
- Select the **right hyperplane** with the **maximum segregation** from the nearest data points of either class, as shown in the **right-hand side figure**.

## Dealing with non-linear and inseparable planes:

Some problems can’t be solved with a **linear hyperplane**, as shown in the figure below **(left-hand side)**.

In such a situation, **SVM** uses a **kernel trick** to transform the input space into a higher-dimensional space, as shown on the right. The data points are plotted on the x-axis and z-axis, where z is the squared sum of x and y (z = x² + y²). Now we can easily segregate these points using linear separation.
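A quick sketch of that idea, assuming scikit-learn is available: on concentric circles a linear SVM fails, but after adding the hand-crafted feature z = x² + y², the very same linear SVM separates the classes almost perfectly.

```python
import numpy as np
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Two concentric rings: not linearly separable in the original 2-D space
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)
acc_2d = SVC(kernel="linear").fit(X, y).score(X, y)

# Kernel-trick-style lift: append z = x^2 + y^2 as a third feature
Z = np.c_[X, X[:, 0] ** 2 + X[:, 1] ** 2]
acc_3d = SVC(kernel="linear").fit(Z, y).score(Z, y)

print(acc_2d, acc_3d)  # the lifted version is near-perfect
```

In practice a kernel (e.g. `kernel="rbf"` in scikit-learn) computes such transformations implicitly, without materializing the extra feature.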

Let’s Code:

1. Importing the libraries and dataset:
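The code listing is missing from this copy of the post, so here is a minimal sketch of step 1. The original presumably reads a CSV (e.g. with `pandas.read_csv`); a synthetic two-feature dataset is generated here instead so the example is self-contained and the later plots stay two-dimensional.

```python
# Step 1: import the libraries and load a dataset.
# NOTE: synthetic stand-in for whatever CSV the original post used.
import numpy as np
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=400, n_features=2, n_informative=2,
                           n_redundant=0, n_clusters_per_class=1,
                           random_state=0)
print(X.shape, y.shape)  # (400, 2) (400,)
```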

2. Splitting the dataset into train and test set:
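A sketch of step 2, assuming the same synthetic dataset as above (repeated so the snippet runs on its own): hold out 25% of the samples for testing.

```python
# Step 2: split the dataset into training and test sets (75% / 25%).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=2, n_informative=2,
                           n_redundant=0, n_clusters_per_class=1,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
print(len(X_train), len(X_test))  # 300 100
```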

3. Applying Feature Scaling on train set:
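Step 3 as a sketch: standardize the features to zero mean and unit variance. The scaler is fitted on the training set only and then applied to both sets, so no test-set information leaks into training.

```python
# Step 3: feature scaling with StandardScaler.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=400, n_features=2, n_informative=2,
                           n_redundant=0, n_clusters_per_class=1,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

sc = StandardScaler()
X_train = sc.fit_transform(X_train)  # fit on the training set only
X_test = sc.transform(X_test)        # reuse the training-set statistics
```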

4. Training the SVM model on the Training set:
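Step 4 sketched with scikit-learn's `SVC`; a linear kernel is assumed here, matching the linear-hyperplane discussion above (the original post may have used a different kernel).

```python
# Step 4: train a linear-kernel SVM on the (scaled) training set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=2, n_informative=2,
                           n_redundant=0, n_clusters_per_class=1,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
sc = StandardScaler()
X_train, X_test = sc.fit_transform(X_train), sc.transform(X_test)

clf = SVC(kernel="linear", random_state=0)
clf.fit(X_train, y_train)
print(clf.n_support_)  # number of support vectors found per class
```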

5. Predicting the test set results:
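Step 5 as a sketch: once fitted, the classifier predicts a label for each held-out test sample.

```python
# Step 5: predict the labels of the test set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=2, n_informative=2,
                           n_redundant=0, n_clusters_per_class=1,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
sc = StandardScaler()
X_train, X_test = sc.fit_transform(X_train), sc.transform(X_test)
clf = SVC(kernel="linear", random_state=0).fit(X_train, y_train)

y_pred = clf.predict(X_test)  # one predicted label per test sample
print(y_pred[:10])
```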

6. Making the Confusion Matrix:
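Step 6 sketched: the confusion matrix tabulates correct and incorrect predictions per class, and the accuracy score summarizes it.

```python
# Step 6: build the confusion matrix and compute accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=2, n_informative=2,
                           n_redundant=0, n_clusters_per_class=1,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
sc = StandardScaler()
X_train, X_test = sc.fit_transform(X_train), sc.transform(X_test)
clf = SVC(kernel="linear", random_state=0).fit(X_train, y_train)
y_pred = clf.predict(X_test)

cm = confusion_matrix(y_test, y_pred)  # rows: true class, cols: predicted
print(cm)
print(accuracy_score(y_test, y_pred))
```

The diagonal of `cm` holds the correct predictions; everything off-diagonal is an error.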

7. Visualizing the train set results using Matplotlib:
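A sketch of step 7: colour a dense grid of points by the class the SVM predicts there, which reveals the decision regions, then overlay the training samples. The figure is saved to a hypothetical file name (`svm_train.png`), and a headless Matplotlib backend is used so the script runs without a display.

```python
# Step 7: visualize the SVM decision regions on the training set.
import os
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render to a file; no display needed
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=2, n_informative=2,
                           n_redundant=0, n_clusters_per_class=1,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
sc = StandardScaler()
X_train, X_test = sc.fit_transform(X_train), sc.transform(X_test)
clf = SVC(kernel="linear", random_state=0).fit(X_train, y_train)

# Predict the class at every point of a fine grid over the feature space
x1, x2 = np.meshgrid(
    np.arange(X_train[:, 0].min() - 1, X_train[:, 0].max() + 1, 0.02),
    np.arange(X_train[:, 1].min() - 1, X_train[:, 1].max() + 1, 0.02))
Z = clf.predict(np.c_[x1.ravel(), x2.ravel()]).reshape(x1.shape)

plt.contourf(x1, x2, Z, alpha=0.3)  # shaded decision regions
plt.scatter(X_train[:, 0], X_train[:, 1], c=y_train, edgecolors="k")
plt.title("SVM (Training set)")
plt.xlabel("Feature 1")
plt.ylabel("Feature 2")
plt.savefig("svm_train.png")
```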

8. Visualizing the test set results using Matplotlib:
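Step 8 is the same plot over the held-out test points; only the scatter data (and the hypothetical output file name) change, while the decision regions come from the same fitted model.

```python
# Step 8: visualize the SVM decision regions on the test set.
import os
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render to a file; no display needed
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=2, n_informative=2,
                           n_redundant=0, n_clusters_per_class=1,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
sc = StandardScaler()
X_train, X_test = sc.fit_transform(X_train), sc.transform(X_test)
clf = SVC(kernel="linear", random_state=0).fit(X_train, y_train)

# Same decision-region grid as before, now built around the test points
x1, x2 = np.meshgrid(
    np.arange(X_test[:, 0].min() - 1, X_test[:, 0].max() + 1, 0.02),
    np.arange(X_test[:, 1].min() - 1, X_test[:, 1].max() + 1, 0.02))
Z = clf.predict(np.c_[x1.ravel(), x2.ravel()]).reshape(x1.shape)

plt.contourf(x1, x2, Z, alpha=0.3)
plt.scatter(X_test[:, 0], X_test[:, 1], c=y_test, edgecolors="k")
plt.title("SVM (Test set)")
plt.xlabel("Feature 1")
plt.ylabel("Feature 2")
plt.savefig("svm_test.png")
```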

The full source code is available in my **GitHub repository**.

**Closing Note:** I hope this blog helps you learn **Support Vector Machine (SVM)** in a better way.

For any queries related to my project contact me over my email id: **powerakash8@gmail.com**