# Hand Gesture Classification Using Python

**AIM**

The goal of this project is to train a Machine Learning algorithm capable of classifying images of different hand gestures, such as a fist, an open palm, a thumbs-up, and others. This classification can be useful for gesture-based navigation, for example.

**DATASET**

A hand gesture recognition database is used, composed of a set of near-infrared images acquired by the Leap Motion sensor. The database contains 10 different hand gestures performed by 10 different subjects (5 men and 5 women).

# IMPORT PACKAGES

First, we have to import a few Python packages that we will need to work with images and arrays.
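The exact imports are not shown in the post; a plausible minimal set for this pipeline (assuming NumPy and scikit-learn, with OpenCV or PIL commonly used for the image-reading step) might look like:

```python
import os                                        # walking the dataset directory
import numpy as np                               # arrays and flattening
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
# import cv2  # OpenCV is a common choice for reading the gesture images
```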

# LOAD DATA

With the above dataset at hand, we now start preparing the images to train the models. We have to load all the images into an array that we will call **X** and all the labels into another array called **y**. The array **Z** contains the images as they appear in the dataset, while the array **X** contains the binarized versions of the images in **Z**.
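A minimal sketch of the **Z** → **X** binarization step might look like the following. The random stand-in images and the threshold value are assumptions for illustration; in the real notebook, **Z** would hold near-infrared images read from the dataset folders.

```python
import numpy as np

# Stand-in for images read from disk: fabricate a few random grayscale
# images so the sketch runs end to end.
rng = np.random.default_rng(0)
Z = [rng.integers(0, 256, size=(120, 320), dtype=np.uint8) for _ in range(6)]
y = [i % 3 for i in range(6)]  # hypothetical gesture labels

# Binarize: pixels brighter than the threshold become 1, the rest 0.
# The threshold value 100 is an assumption, not taken from the notebook.
THRESHOLD = 100
X = [(img > THRESHOLD).astype(np.uint8) for img in Z]
```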

Now that we have converted all the pixels into corresponding numbers, each of our images is a multidimensional array, so we have to flatten the arrays to proceed further. NumPy helps us here with the *flatten()* method.
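For example, flattening turns each 2-D image array into a single feature vector (the 120 × 320 image size here is illustrative):

```python
import numpy as np

# Each (120 x 320) image becomes one 38,400-element feature vector.
images = [np.ones((120, 320), dtype=np.uint8) for _ in range(4)]  # placeholder binary images
X_flat = np.array([img.flatten() for img in images])
print(X_flat.shape)  # (4, 38400)
```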

# Principal Component Analysis and Pre-Processing

**Principal Component Analysis** (**PCA**) explains the variance-covariance structure of a set of variables through linear combinations, and is often used as a dimensionality-reduction technique. We use it here to reduce the number of dimensions in our data.

We reduce the number of dimensions to 20.
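A sketch of this reduction with scikit-learn's `PCA`, using a small random matrix as a stand-in for the flattened images (the real data has far more features per image):

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in for the flattened image matrix: 200 samples, 512 features.
rng = np.random.default_rng(42)
X_flat = rng.random((200, 512))

# Project onto the top 20 principal components, as in the post.
pca = PCA(n_components=20)
X_reduced = pca.fit_transform(X_flat)
print(X_reduced.shape)  # (200, 20)
```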

We now normalize the data to make sure different features take on a similar range of values. For this purpose we use *StandardScaler()*.
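A sketch of the split-and-scale step, with random stand-in data; the 75/25 split ratio is an assumption. Note that the scaler is fit on the training split only, so no information from the test set leaks into the scaling:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_reduced = rng.random((200, 20))    # stand-in for the PCA output
y = rng.integers(0, 10, size=200)    # stand-in for the 10 gesture labels

X_train, X_test, y_train, y_test = train_test_split(
    X_reduced, y, test_size=0.25, random_state=0)

# Fit on the training split, then apply the same transform to both splits.
scaler = StandardScaler().fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
```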

Now that the training and testing data have been normalized, we can start training different models to classify the hand gestures.

**Stochastic Gradient Descent**

Here we use the logistic ('log') loss function as a parameter.

**Decision Tree Classifier**

The maximum depth of the decision tree is set as 10 in the parameter.
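A sketch of this classifier with the depth cap from the post, again on synthetic stand-in data:

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the scaled, PCA-reduced gesture features.
X, y = make_classification(n_samples=300, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)

# Cap the tree depth at 10, as in the post, to limit overfitting.
tree = DecisionTreeClassifier(max_depth=10, random_state=0)
tree.fit(X, y)
print(tree.get_depth())  # at most 10
```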

**Random Forest**

The number of trees has been set as 100 and the depth of each tree has been set to 15 in the parameters.
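With the parameters from the post, a sketch might look like:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for the scaled, PCA-reduced gesture features.
X, y = make_classification(n_samples=300, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)

# 100 trees, each limited to depth 15, as described in the post.
forest = RandomForestClassifier(n_estimators=100, max_depth=15, random_state=0)
forest.fit(X, y)
print(len(forest.estimators_))  # 100
```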

**Logistic Regression**
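A sketch on synthetic stand-in data; the raised `max_iter` is an assumption to ensure the solver converges:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the scaled, PCA-reduced gesture features.
X, y = make_classification(n_samples=300, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)

logreg = LogisticRegression(max_iter=1000)  # max_iter raised for convergence
logreg.fit(X, y)
```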

**Naive Bayes**

We are using the Gaussian Naive Bayes algorithm; other variants include Multinomial Naive Bayes.
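A sketch with scikit-learn's `GaussianNB` on the same kind of synthetic stand-in data:

```python
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for the scaled, PCA-reduced gesture features.
X, y = make_classification(n_samples=300, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)

# Gaussian variant: assumes each feature is normally distributed per class.
gnb = GaussianNB()
gnb.fit(X, y)
```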

**Gradient Descent Classifier**

# RESULTS

| Classifier | Accuracy |
| --- | --- |
| Stochastic Gradient Descent | 70.3% |
| Decision Tree | 95% |
| Random Forest | 99.925% |
| Logistic Regression | 72.2% |
| Gaussian Naive Bayes | 65.6% |
| Gradient Descent | 23.6% |

# CONCLUSION

Based on the results presented above, we can conclude that the Random Forest classifier classifies the gestures most accurately, reaching an accuracy of 99.925%.

The accuracy of the model depends on many aspects of our dataset and on the features present in the training data. The dataset was created without any noise, i.e., the gestures presented are reasonably distinct, and the images are clear and have no background. There were also enough samples to make our model robust.

The drawback is that for different problems we would probably need more data to push the parameters of our model in a better direction. Because of the chaos and noise of real-world scenarios, we would need noisier data that resembles the real world.

For the full notebook, check out my GitHub repository :D

# CITATION

T. Mantecón, C.R. del Blanco, F. Jaureguizar, N. García, "Hand Gesture Recognition using Infrared Imagery Provided by Leap Motion Controller", Int. Conf. on Advanced Concepts for Intelligent Vision Systems, ACIVS 2016, Lecce, Italy, pp. 47–57, 24–27 Oct. 2016. (doi: 10.1007/978-3-319-48680-2_5)