Create a simple Sign Language Recognition App using Teachable Machine, Monaca, Vue.js and Framework7

Pablo Mateos García
Published in The Web Tub · 6 min read · Mar 31, 2022

Have you ever wanted to develop your own AI app? In this article, we will learn how to develop a simple AI application for recognising some Sign Language gestures, using Teachable Machine, Monaca, Vue.js and Cordova.

Japanese Sign Language App designed in Figma

There are more than 300 different sign languages in the world, so you can choose the one that you like the most. In my case, I decided to use Japanese Sign Language (JSL) for this app, because currently there are not many recognition apps for JSL.

Japanese Sign Language chart by Dr Koto / CC BY 3.0

Used technologies

Teachable Machine is a web-based tool by Google that allows users to create machine learning models quickly and easily, making the whole process simple for everyone, even beginners.

Monaca is a collection of software tools and services for building and deploying HTML5 hybrid mobile apps. It is built on Apache Cordova and offers resources such as a cloud IDE, local development tools, a debugger, and backend support.

Apache Cordova is an open-source hybrid mobile app development environment. It allows software developers to build apps for phones using HTML5, CSS and JavaScript. In addition, it can be extended with native plug-ins, allowing developers to add more functionality.

Vue.js (commonly known as Vue, pronounced /vjuː/, like ‘view’) is an open-source JavaScript framework for building user interfaces and single-page apps. It has an approachable API and its adoption is growing quickly.

Framework7 is a free, open-source framework for developing mobile, desktop and web apps. It can also be used as a prototyping tool. It lets developers create apps with HTML, CSS and JavaScript in a clear and simple way.

Prerequisites

This is what we need to follow this tutorial:

  • Node.js and npm installed
  • A Monaca account, plus the Monaca CLI (for local development) or the Monaca Cloud IDE
  • A computer or phone with a camera, to take image samples and test the recognition

Is everything set up? Let’s start!

Creating the model in Teachable Machine

Firstly, we will create the model using Teachable Machine. On their website, click the ‘Get Started’ button. You can either import a previously saved project or create a new one. The project types currently available are Image, Audio and Pose. In our case, we will choose ‘Image Project’, as we will be analysing the images that the camera captures.

Create a new project in Teachable Machine

Now, we can either choose to create a standard or embedded image model. For our app, we will work with the ‘Standard image model’.

Choose image model

Now it is time to start adding image samples to our model. We should take images for every class we want the app to identify. I took samples for the five Japanese vowels あ (a), い (i), う (u), え (e) and お (o). In addition, I created one last class called ‘何も’ (nothing), with images of gestures that don’t belong to any of the other classes.

Define model classes and add image samples

If you want to improve the model’s performance, I recommend adding samples from different people: the model will then generalise better to new users. In my case, I added around 1400 image samples per class, taken from 6 different people, including a Japanese Sign Language expert.

You can train the model at any time by clicking the ‘Train Model’ button. Training will take a few minutes, depending on how many image samples you added. After it finishes, you can preview and test the predictions before exporting the model.

Testing the model

Once your model achieves the expected outcome, you can export it by clicking on the ‘Export Model’ button.
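
If you upload the model when exporting (shareable link), Teachable Machine hosts the model.json and metadata.json files for you, and the app can later load them with the @teachablemachine/image library (which runs on TensorFlow.js). A minimal sketch, assuming a hosted model (the URL below is a placeholder, not the real model):

import * as tmImage from '@teachablemachine/image';

// Placeholder: replace with the URL Teachable Machine gives you after uploading the model
const MODEL_BASE = 'https://teachablemachine.withgoogle.com/models/<your-model-id>/';

let model;

// Fetch the exported model.json and metadata.json once, at start-up
async function loadModel() {
  model = await tmImage.load(MODEL_BASE + 'model.json', MODEL_BASE + 'metadata.json');
}

// Run one prediction on an image, video or canvas element and return the most likely class
async function classify(element) {
  const predictions = await model.predict(element); // [{ className, probability }, ...]
  predictions.sort((a, b) => b.probability - a.probability);
  return predictions[0]; // e.g. { className: 'あ', probability: 0.97 }
}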

Now, let’s start with the code!

Creating the project in Monaca

Now that we have the model working, we need to create the project in Monaca. This can be done either locally or in the cloud IDE. We will be using the Framework7 Vue 3 Minimal template when creating the project.

Locally:

If you don’t have the Monaca CLI installed, please run:

npm install -g monaca

Before starting, we have to log in using the following command:

monaca login

After logging in, we can create the project by running:

monaca create JSL

After running it, we should choose the following options:

Create a new Monaca project locally

Cloud IDE:

After clicking on Create New Project, we will choose the options shown in the following screenshot:

Create a new Monaca project using Cloud IDE

Project Type: Framework Templates

Framework: Vue

Template: Framework7 Vue3 Minimal

Components and js files

Components

In this app, we will be using the following components:

  • App: the main component of the app.
  • Chip: a component that manages the chip element of the app. This element will control the language of the app.

The Chip component uses vue-i18n to translate the app’s texts into the defined languages. In my case, I decided to translate the app into Japanese, English and Spanish.

Changing the language of the app

The data for the translation will be defined in the app.js file, which we will explain later.
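
To give a rough idea before we get to that file, here is a minimal vue-i18n sketch; the message keys and translations below are illustrative, not the exact ones used in the project:

import { createI18n } from 'vue-i18n';

// Illustrative messages for the three languages of the app
const messages = {
  ja: { settings: '設定', interval: '予測の間隔' },
  en: { settings: 'Settings', interval: 'Prediction interval' },
  es: { settings: 'Ajustes', interval: 'Intervalo de predicción' },
};

export const i18n = createI18n({
  locale: 'ja',          // current language, controlled by the Chip component
  fallbackLocale: 'en',
  messages,
});

// In a template, a text is rendered with {{ $t('settings') }}, and the Chip
// component can switch languages, e.g. by changing i18n.global.locale
// (available as a plain string in vue-i18n's default legacy mode).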

Js files

This app will include the following js files:

  • App: initialises the app and the i18n package, and mounts the app.
Creating the i18n and mounting the app
  • Routes: defines the routes of the app.
  • Settings: allows the user to change the value of the camera’s prediction interval. It uses Vue’s state management with the Reactivity API to share information between Vue components through a small public API (a minimal sketch follows this list).
Defining the State Management with Reactivity API
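
A minimal sketch of such a settings.js store, following Vue’s simple state-management-with-reactivity pattern (the property name and default value are illustrative):

import { reactive, readonly } from 'vue';

// Private reactive state, shared by every component that imports this module
const state = reactive({
  predictionInterval: 1000, // milliseconds between two camera predictions (illustrative default)
});

// Public API: components read through the readonly view and write through the setter
export const settings = readonly(state);

export function setPredictionInterval(ms) {
  state.predictionInterval = ms;
}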

Pages

The app will be divided into these pages:

Home Page

Home Page

The main page of the app, where the recognition takes place. It works both in the browser and on the mobile phone, and the set-up and calculation code is shared between the two.

Setting everything up and performing calculations
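
As a rough sketch of that shared part, reusing the classify() helper and the settings store from the earlier sketches (function and variable names are illustrative):

import { settings } from '../js/settings';

let timer = null;

// Every settings.predictionInterval ms, classify the current frame drawn on the canvas
function startPredicting(canvas, onResult) {
  timer = setInterval(async () => {
    const best = await classify(canvas);
    onResult(best.className, best.probability); // e.g. show 'あ — 97%' in the UI
  }, settings.predictionInterval);
}

function stopPredicting() {
  if (timer) {
    clearInterval(timer);
    timer = null;
  }
}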

Phone functions

To capture camera frames on the phone, we will be using the Cordova Canvas Camera plugin.

Phone functions

Browser functions

For the browser, we will use the browser’s media access (getUserMedia) to read from the user’s camera.

Browser functions
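
A minimal sketch of that browser path, using the standard getUserMedia API (element names are illustrative):

// Ask for the user's camera and stream it into a <video> element
async function startBrowserCamera(videoEl) {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { facingMode: 'user' }, // front camera where available
    audio: false,
  });
  videoEl.srcObject = stream;
  await videoEl.play();
}

// Copy the current video frame onto the canvas that is passed to the model
function drawFrame(videoEl, canvas) {
  canvas.getContext('2d').drawImage(videoEl, 0, 0, canvas.width, canvas.height);
}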

Settings Page

Settings Page

This page will allow the user to customise the time interval between camera predictions. It accesses the public API of settings.js and uses the Chip component.

Changing the settings values
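
For example, the page could write the value chosen by the user back through the store’s setter, roughly like this (the input and method names are illustrative):

import { settings, setPredictionInterval } from '../js/settings';

export default {
  computed: {
    // Current value, read from the shared store
    currentInterval() {
      return settings.predictionInterval;
    },
  },
  methods: {
    // Called from the change event of a range/stepper input on this page
    onIntervalChange(value) {
      setPredictionInterval(Number(value));
    },
  },
};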

Conclusion

In this article, we have learned how to develop a simple AI application for recognising some Sign Language gestures using Teachable Machine, Monaca, Vue.js and Cordova. You can find the code here.

I hope every step was easy to follow. Please contact me if you have any questions or comments.

If you liked Teachable Machine and want to try to develop another similar app, please check this post: Create Your First Machine Learning Mobile Application

If you are interested in learning more about Japanese Sign Language, I encourage you to visit this Youtube channel: DeafJapan TV

Good luck and enjoy coding :)
