Realtime Recognition of American Sign Language Alphabet

Aybüke Yalçıner
2 min read · Jun 20, 2019


Group Members: Aybüke Yalçıner, Enes Furkan Çiğdem, Furkan Çağlayan

Sign language is a form of non-verbal communication used mainly by deaf people and people who cannot speak. Although knowing a sign language is very important for communicating with them, most people do not know one, which makes the lives of deaf people harder. In this task, we implement a real-time system that recognizes the letters of the American Sign Language (ASL) alphabet.

The dataset consists of 1560 training, 360 validation, and 360 test images, collected from two signers. It covers 24 of the 26 letters, because the remaining two (J and Z) involve motion and cannot be captured in a single frame.

First, we train a model to predict the class of each letter. We use the VGG-16 architecture with a modified fully connected layer.

The model is pretrained on the ImageNet dataset. After modifying its fully connected layer, we retrain several of the layers and achieve more than 90% accuracy on the test set.
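The post does not name a deep learning framework; below is a minimal sketch of the architecture change, assuming PyTorch and torchvision, where the final fully connected layer of a pretrained VGG-16 is swapped for a 24-class one:

```python
import torch.nn as nn
from torchvision import models

# Load VGG-16 with weights pretrained on ImageNet.
model = models.vgg16(pretrained=True)

# Freeze the convolutional feature extractor; only the later
# (retrained) layers will receive gradient updates.
for param in model.features.parameters():
    param.requires_grad = False

# Replace the last fully connected layer (1000 ImageNet classes)
# with a 24-way classifier, one output per static ASL letter.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 24)
```

Which layers were actually unfrozen during retraining is not specified in the post, so the freezing shown here is illustrative.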

To capture frames, we first activate the device's camera. To do that, we use Python's OpenCV library.
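A minimal sketch of such a capture loop (the window name and the quit key are our choices, not taken from the original code):

```python
import cv2

cap = cv2.VideoCapture(0)  # open the default camera (device 0)

while True:
    ret, frame = cap.read()  # grab one frame
    if not ret:
        break

    # At this point the frame would be segmented and passed to the
    # trained VGG-16 model to predict which letter is being signed.
    cv2.imshow("Sign Language Recognizer", frame)

    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()
```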

The code above activates the device's camera and takes frames from it. As soon as a frame is captured, it is sent to our model for prediction. The same code also builds the user interface, which is shown below:

The user interface consists of several windows. The window in the top-right corner shows the segmented hand, which can be displayed in several forms: binary, gradient magnitudes, or unprocessed (only blurred).

When the user presses 'c' on the keyboard, the segmented form is shown; when 'b' is pressed, the binary form is shown; and when 'h' is pressed, the gradient magnitudes are shown in the top-right corner of the screen.
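A sketch of how that key handling might look with OpenCV's waitKey. The processing steps used here (Gaussian blur, a fixed binary threshold, Sobel gradient magnitudes) are our assumptions for illustration, not the post's exact pipeline:

```python
import cv2

cap = cv2.VideoCapture(0)
mode = "segmented"  # current preview mode (hypothetical variable)

while True:
    ret, frame = cap.read()
    if not ret:
        break

    key = cv2.waitKey(1) & 0xFF
    if key == ord("c"):
        mode = "segmented"  # blurred, otherwise unprocessed form
    elif key == ord("b"):
        mode = "binary"     # black-and-white thresholded mask
    elif key == ord("h"):
        mode = "gradient"   # gradient magnitudes
    elif key == ord("q"):
        break

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if mode == "segmented":
        view = cv2.GaussianBlur(gray, (5, 5), 0)
    elif mode == "binary":
        _, view = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
    else:
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
        view = cv2.convertScaleAbs(cv2.magnitude(gx, gy))

    cv2.imshow("hand", view)

cap.release()
cv2.destroyAllWindows()
```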

In the bottom-right corner of the screen, there is a slider that helps specify the lower and upper thresholds according to the ambient light.
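OpenCV exposes sliders through trackbars. A sketch, assuming the two thresholds feed an edge-detection step (Canny is used here purely as an example; the post does not say what the thresholds actually control):

```python
import cv2

def nothing(value):
    pass  # slider values are polled with getTrackbarPos instead

cv2.namedWindow("controls")
cv2.createTrackbar("lower", "controls", 50, 255, nothing)
cv2.createTrackbar("upper", "controls", 200, 255, nothing)

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break

    # Read the slider positions each frame so the user can adapt
    # the thresholds to the ambient light on the fly.
    lower = cv2.getTrackbarPos("lower", "controls")
    upper = cv2.getTrackbarPos("upper", "controls")

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, lower, upper)
    cv2.imshow("controls", edges)

    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```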

The last part of the task is calling the classes from main.py, which is as follows:
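The actual main.py lives in the repository; the sketch below only illustrates the idea of wiring the pieces together. All names here (SignModel, Camera, the weights file) are hypothetical:

```python
# main.py -- illustrative structure only; module and class names
# are our inventions, not the repository's actual API.
from model import SignModel
from camera import Camera

def main():
    model = SignModel(weights="vgg16_finetuned.pth")  # fine-tuned VGG-16
    camera = Camera(model)  # wires the model into the capture loop
    camera.run()            # starts real-time recognition

if __name__ == "__main__":
    main()
```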

A demo of the system is shown below:

For more information, you can visit the GitHub repository: https://github.com/Enescigdem/SignLanguageRecognizer
