# Signsage: Sign Language to Text Translator Website for Inclusive Education
Signsage is a Python-based web application designed to reduce the communication barrier between the deaf community and hearing people in educational settings. It recognizes sign language in real time and translates hand gestures into text, offering a natural interface for students and teachers.
## Key Features

- **Real-time Sign Language Recognition**: Uses computer vision to recognize hand gestures from webcam video.
- **Text Translation**: Converts recognized signs into readable text displayed in the web interface.
- **Inclusive Communication**: Helps teachers communicate easily with deaf students, making the learning environment more effective.
- **Customizable Sign Language Set**: Lets instructors tailor the set of recognized signs to those relevant to their curriculum.
## Requirements

- Python 3.x
- OpenCV (`cv2`)
- NumPy
- MediaPipe
- scikit-learn
- Flask
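If the repository's `requirements.txt` is missing or out of date, a minimal version matching the list above would look like this (unpinned versions are an assumption; pin them for reproducibility):

```text
opencv-python
numpy
mediapipe
scikit-learn
flask
```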
## Words available for interpretation
## Usage

1. Clone the repository.
2. Make sure you have Python 3.x installed on your system along with the required libraries:
   ```bash
   python -m pip install -r requirements.txt
   ```
3. Go into the root folder of the project and run the app with the following command:
   ```bash
   python app.py
   ```
4. Go to the development server at http://localhost:5000.
## Navigating through the project

### scripts/one_collect_imgs.py
This script allows you to collect real-time image data from your webcam with specified labels. It creates a dataset for each label by capturing images and storing them in separate directories within a specified data directory.
#### Usage

1. Run the script.
2. Enter the labels you want to create when prompted. Enter `-1` to stop adding labels.
3. Once the labels are entered, the webcam will activate.
4. Press `Q` to start capturing images for each label.
5. Images will be stored in the specified data directory under separate folders for each label.
#### Parameters

- `DATA_DIR`: Directory to store the collected data. Default is `./data`.
- `dataset_size`: Number of images to collect for each label. Default is `300`.
#### Notes

- Ensure proper lighting and background for accurate image collection.
- Press `Q` to start capturing images after each label prompt.
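For orientation, here is a minimal sketch of the capture loop this script implements (label handling is simplified and the variable names are illustrative; see `scripts/one_collect_imgs.py` for the real logic):

```python
import os
import cv2

DATA_DIR = "./data"     # where labeled image folders are created
dataset_size = 300      # images captured per label
label = "hello"         # illustrative: one label entered at the prompt

os.makedirs(os.path.join(DATA_DIR, label), exist_ok=True)
cap = cv2.VideoCapture(0)

# Wait until the user presses Q before capturing for this label.
while True:
    ret, frame = cap.read()
    if not ret:
        continue
    cv2.putText(frame, 'Press "Q" to start', (50, 50),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("frame", frame)
    if cv2.waitKey(25) & 0xFF == ord("q"):
        break

# Capture dataset_size frames and store them under the label's folder.
for i in range(dataset_size):
    ret, frame = cap.read()
    if not ret:
        continue
    cv2.imshow("frame", frame)
    cv2.waitKey(25)
    cv2.imwrite(os.path.join(DATA_DIR, label, f"{i}.jpg"), frame)

cap.release()
cv2.destroyAllWindows()
```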
### scripts/two_create_dataset.py
This script captures images from a specified directory, detects hand landmarks using the MediaPipe library, and saves the landmark data along with corresponding labels into a pickle file.
#### Usage

1. Place your image data in the specified data directory (`./data` by default).
2. Run the script.
3. The script will process each image, extract hand landmarks, and save the data along with labels into a pickle file named `data.pickle`.
#### Parameters

- `DATA_DIR`: Directory containing the image data. Default is `./data`.
#### Notes
- Ensure your images have sufficient resolution and quality for accurate hand landmark detection.
- The script assumes that each subdirectory in the data directory represents a different label/class.
- Hand landmark data is saved as a list of coordinates relative to the top-left corner of the bounding box of the detected hand.
- The pickle file `data.pickle` contains a dictionary with keys 'data' and 'labels', where 'data' is a list of hand landmark data and 'labels' is a list of corresponding labels.
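A rough sketch of that process, assuming the folder-per-label layout produced by `scripts/one_collect_imgs.py` (names are illustrative):

```python
import os
import pickle

import cv2
import mediapipe as mp

DATA_DIR = "./data"
hands = mp.solutions.hands.Hands(static_image_mode=True,
                                 min_detection_confidence=0.3)

data, labels = [], []
for label in os.listdir(DATA_DIR):
    label_dir = os.path.join(DATA_DIR, label)
    for img_name in os.listdir(label_dir):
        img = cv2.imread(os.path.join(label_dir, img_name))
        results = hands.process(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
        if not results.multi_hand_landmarks:
            continue  # skip images where no hand was detected
        for hand_landmarks in results.multi_hand_landmarks:
            xs = [lm.x for lm in hand_landmarks.landmark]
            ys = [lm.y for lm in hand_landmarks.landmark]
            # Coordinates relative to the top-left of the hand's bounding box.
            sample = []
            for lm in hand_landmarks.landmark:
                sample.extend([lm.x - min(xs), lm.y - min(ys)])
            data.append(sample)
            labels.append(label)

with open("data.pickle", "wb") as f:
    pickle.dump({"data": data, "labels": labels}, f)
```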
### scripts/three_train_classifier.py
This script trains a Random Forest classifier for gesture recognition using hand landmarks data. It also evaluates the model’s performance using cross-validation and saves the trained model for future use.
#### Usage

1. Ensure you have hand landmarks data saved as `data.pickle` in the project directory.
2. Run the script.
3. The script will load the hand landmarks data, preprocess it, train a Random Forest classifier, and evaluate its performance.
#### Notes
- Hand landmarks data should be saved as a dictionary (`data.pickle`) containing 'data' (a list of hand landmark sequences) and 'labels' (a list of corresponding labels).
- The script pads each hand landmark sequence with zeros so that all sequences have the same length, which is necessary for training the classifier.
- The classifier is trained using a stratified train-test split and evaluated with cross-validation for robustness.
- The trained model is saved as `model.p` using the `pickle` module for future use.
- Adjust the model parameters and preprocessing steps as needed for improved performance.
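A sketch of that training flow under stated assumptions (the 80/20 split, default `RandomForestClassifier` settings, and the `{'model': ...}` pickle layout are illustrative, not confirmed by the script):

```python
import pickle

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_score, train_test_split

# Load the landmark dataset produced by scripts/two_create_dataset.py.
with open("data.pickle", "rb") as f:
    dataset = pickle.load(f)

# Zero-pad every sample so all feature vectors share one length.
max_len = max(len(s) for s in dataset["data"])
X = np.array([s + [0] * (max_len - len(s)) for s in dataset["data"]])
y = np.array(dataset["labels"])

# Stratified split preserves the per-label class balance.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, shuffle=True, stratify=y)

model = RandomForestClassifier()
model.fit(X_train, y_train)

print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
print("cross-val scores:", cross_val_score(model, X, y, cv=5))

# Persist the trained model for the inference scripts.
with open("model.p", "wb") as f:
    pickle.dump({"model": model}, f)
```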
### scripts/four_inference_classifier.py
This script performs real-time gesture recognition using hand landmarks detected by the MediaPipe library. It loads a pre-trained gesture classification model and overlays the predicted gesture label on the input video stream.
#### Usage

1. Ensure you have a trained gesture classification model saved as `model.p` in the project directory.
2. Run the script.
3. The script will activate your webcam and overlay the predicted gesture label on the detected hand landmarks in real time.
#### Notes
- The gesture classification model is assumed to be trained externally and saved using the `pickle` module.
- Hand landmarks are detected using the MediaPipe library, providing a robust representation of hand gestures.
- The script draws bounding boxes around detected hands and overlays the predicted gesture label on the video stream.
- Adjust the `min_detection_confidence` parameter of the `Hands` class to control the confidence threshold of hand landmark detection.
- Ensure proper lighting and background for accurate hand landmark detection and gesture recognition.
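A condensed sketch of the inference loop described above (it assumes `model.p` holds a `{'model': ...}` dictionary and that the classifier predicts readable labels directly; if your model outputs indices, map them through your `labels_dict`, and pad the feature vector if training used zero-padding):

```python
import pickle

import cv2
import mediapipe as mp

model = pickle.load(open("model.p", "rb"))["model"]
hands = mp.solutions.hands.Hands(min_detection_confidence=0.3)
cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    H, W = frame.shape[:2]
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        for hand_landmarks in results.multi_hand_landmarks:
            xs = [lm.x for lm in hand_landmarks.landmark]
            ys = [lm.y for lm in hand_landmarks.landmark]
            # Same bounding-box-relative features used at training time.
            sample = []
            for lm in hand_landmarks.landmark:
                sample.extend([lm.x - min(xs), lm.y - min(ys)])
            prediction = model.predict([sample])[0]
            # Bounding box in pixel coordinates, with a small margin.
            x1, y1 = int(min(xs) * W) - 10, int(min(ys) * H) - 10
            x2, y2 = int(max(xs) * W) + 10, int(max(ys) * H) + 10
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 0, 0), 2)
            cv2.putText(frame, str(prediction), (x1, y1 - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 0, 0), 2)
    cv2.imshow("frame", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```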
### app.py
This Flask-based web application streams real-time video from your webcam and performs gesture recognition using a pre-trained model. The predicted gesture labels are overlaid on the video stream and displayed on a web page.
#### Usage

1. Ensure you have your pre-trained gesture classification model saved and the inference code ready.
2. Run the Flask application (`app.py`).
3. Open your web browser and navigate to `http://localhost:5000` or `http://127.0.0.1:5000`.
4. You should see the real-time video stream with predicted gesture labels overlaid.
#### Notes
- The `GestureClassifier` class is assumed to be implemented in `inference_classifier.py`.
- The Flask application captures frames from the webcam using OpenCV, performs gesture recognition with the `GestureClassifier` class, and streams the processed frames to the web page.
- Ensure proper permissions for accessing the webcam.
- Adjust the URL (`http://localhost:5000`) according to your Flask application settings.
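For orientation, a minimal skeleton of the MJPEG streaming pattern the app uses; the `GestureClassifier` calls are commented out because its interface isn't documented here, so the constructor and method names shown are hypothetical:

```python
import cv2
from flask import Flask, Response, render_template_string

# Hypothetical import: the classifier is said to live in inference_classifier.py.
# from inference_classifier import GestureClassifier

app = Flask(__name__)
# classifier = GestureClassifier("model.p")  # hypothetical constructor

def generate_frames():
    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # frame = classifier.annotate(frame)  # hypothetical: overlay the label
        ok, buf = cv2.imencode(".jpg", frame)
        if not ok:
            continue
        # MJPEG streaming: emit one JPEG per multipart chunk.
        yield (b"--frame\r\n"
               b"Content-Type: image/jpeg\r\n\r\n" + buf.tobytes() + b"\r\n")

@app.route("/video_feed")
def video_feed():
    return Response(generate_frames(),
                    mimetype="multipart/x-mixed-replace; boundary=frame")

@app.route("/")
def index():
    return render_template_string('<img src="{{ url_for("video_feed") }}">')

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```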