Pro Gesture Translator | Blogpost 1

Mini Project | Govt. College of Engineering Kannur

Group Members

  1. Adarsh J
  2. Bibin Jaimon
  3. Nanda V
  4. Sreekanth KC
  5. Vishnupriya Ramesh

Introduction

Our aim is to build a bridge between deaf and hearing people. American Sign Language (ASL) is the gesture language that we are converting into text and speech. We need to collect a data set for the training and testing of our model, and we use a neural network model to map each gesture to its corresponding character.
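
To give a rough idea of what such a model could look like, here is a minimal sketch of a Keras classifier. The input size (64x64 grayscale), the layer sizes and the training settings are assumptions for illustration only, not our final architecture; the 26 outputs correspond to the letters of the ASL alphabet.

    # Minimal sketch of a gesture classifier in Keras.
    # Input shape and layer sizes are assumptions, not the final design.
    from keras.models import Sequential
    from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

    model = Sequential([
        Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 1)),
        MaxPooling2D((2, 2)),
        Conv2D(64, (3, 3), activation='relu'),
        MaxPooling2D((2, 2)),
        Flatten(),
        Dense(128, activation='relu'),
        Dense(26, activation='softmax'),  # one output per alphabet character
    ])
    model.compile(optimizer='adam',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    model.summary()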

Reference Paper :

Anup Kumar, Karun Thankachan, Mevin M. Dominic, "Sign Language Recognition," IEEE 3rd International Conference on Recent Advances in Information Technology (RAIT), pp. 422–428, 2016.

GUI

Main System View

Tools and Dependencies

  1. Anaconda
  2. Spyder
  3. Python
  4. Numpy
  5. Keras
  6. OpenCV

Data Set Collection

The first step is to collect the data set needed for the training and testing processes. Here, we use OpenCV for image processing: each captured frame is processed and the region of interest is stored in the roi variable, which you can then save to your own data path.
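
A minimal sketch of how this capture step might look is given below. The ROI coordinates, the threshold value, the key bindings and the save path (which must already exist) are assumptions to adapt to your own setup.

    # Sketch of capturing and preprocessing the ROI with OpenCV.
    # ROI box, threshold and save path are assumptions; adjust as needed.
    import cv2

    cap = cv2.VideoCapture(0)          # default webcam
    count = 0
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        frame = cv2.flip(frame, 1)     # mirror view for the user
        cv2.rectangle(frame, (100, 100), (300, 300), (0, 255, 0), 2)
        roi = frame[100:300, 100:300]  # region of interest
        roi = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)   # grayscale
        roi = cv2.GaussianBlur(roi, (5, 5), 0)        # reduce noise
        _, roi = cv2.threshold(roi, 120, 255, cv2.THRESH_BINARY)
        cv2.imshow('frame', frame)
        cv2.imshow('roi', roi)
        key = cv2.waitKey(1) & 0xFF
        if key == ord('c'):            # press 'c' to save the current ROI
            cv2.imwrite('data/train/A/{}.jpg'.format(count), roi)
            count += 1
        elif key == ord('q'):          # press 'q' to quit
            break
    cap.release()
    cv2.destroyAllWindows()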

We need a data set without noise and with a plain background. For each alphabet, we need to collect 2000 images for the training process and 600 images for the testing process.
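
As a quick sanity check while collecting, a small script like the one below could report how many images have been saved per letter. The folder layout data/train/<letter> and data/test/<letter> is an assumption.

    # Report letters that are still short of the required image counts.
    # The data/train and data/test folder layout is an assumption.
    import os
    import string

    for split, required in [('train', 2000), ('test', 600)]:
        for letter in string.ascii_uppercase:
            folder = os.path.join('data', split, letter)
            n = len(os.listdir(folder)) if os.path.isdir(folder) else 0
            if n < required:
                print('{}/{}: {} of {} images collected'.format(split, letter, n, required))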