Create an AI-powered emotions detection app with Monaca, React and Framework 7

Pablo Mateos García
Published in The Web Tub
6 min read · Sep 26, 2022

In this article, we will learn how to create an AI-powered emotions detection hybrid application using Monaca, React, Framework7, and TensorFlow.js.

Technologies used

Before we start creating the project, it is good to understand what technologies we will be using in this tutorial.

Monaca is a collection of software tools and services for building and deploying HTML5 mobile hybrid apps. It is built using Apache Cordova and offers different resources like Cloud IDE, local development tools, a debugger, remote build, and deployment support.

Apache Cordova is an open-source hybrid mobile app development environment. It allows software developers to build apps for phones using HTML5, CSS and JavaScript. It can also be extended with native plug-ins, allowing developers to add further functionality.

React (also known as React.js or ReactJS) is a free open-source JavaScript library designed to create user interfaces, making single-page app development easier. It is maintained by Meta (Facebook) and a community of more than a thousand contributors.

Framework7 is a free open-source framework for developing mobile, desktop or web apps. It can also be used as a prototyping tool, and it lets developers build apps with HTML, CSS, and JavaScript in a simple, clear way.

TensorFlow.js is a library for machine learning in JavaScript. It provides many pre-trained models that are ready to use, but also gives the user the option to build and train their own models directly in JavaScript. The whole list of pre-trained models can be found here.

Prerequisites

This is what we need to follow this tutorial:

- A Monaca account (a Pro Plan is needed later, since the app uses custom Cordova plugins)
- Node.js and npm, if you want to develop locally
- The Monaca Debugger app on your phone, for testing on a device

Are you ready? Let’s start!

Creating the project

First, we need to create the project in Monaca. This can be done either locally or in the Cloud IDE. We decided to use the Framework7 v6 React Blank template to create the project.

Locally

If you don’t have the Monaca CLI installed, run:

npm install -g monaca

Before starting, we have to log in using the following command:

monaca login

After logging in, we can create the project by running:

monaca create emotions_detection_app

After running it, we should choose the following options:

Create a new Monaca project locally

Cloud IDE

After clicking on Create New Project, we will choose the options shown in the following screenshot:

Create a new Monaca project using Cloud IDE

Project Type: Framework Templates

Framework: Onsen UI and React

Template: Framework7 React Blank

First, we need to install one dependency and three Cordova plugins:

Install plugins and dependency
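
Since the original commands were shown as an image, here is what the installation might look like locally. The plugin list is an assumption based on the features used later in this tutorial (camera stream, file access, vibrations), with face-api.js as the face detection dependency:

npm install face-api.js
npm install com.virtuoworks.cordova-plugin-canvascamera cordova-plugin-file cordova-plugin-vibration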

Then add these three plugins to the “package.json”:

package.json
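
A sketch of the relevant section, assuming the three plugins named above; in a Cordova-based Monaca project, plugins are registered under the “cordova” key:

{
  "cordova": {
    "plugins": {
      "com.virtuoworks.cordova-plugin-canvascamera": {},
      "cordova-plugin-file": {},
      "cordova-plugin-vibration": {}
    }
  }
}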

Another way of installing the Cordova plugins is by doing it in the Monaca Cloud IDE.

Import Cordova plugins using Monaca Cloud IDE
Import Cordova plugins using Monaca Cloud IDE

After installing the dependencies and plugins, the “package.json” should look something like the example below.

package.json
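
For reference, the relevant parts might look roughly like this (the package versions are illustrative and may differ for you):

{
  "dependencies": {
    "face-api.js": "^0.22.2",
    "framework7": "^6.3.0",
    "framework7-react": "^6.3.0",
    "react": "^17.0.2",
    "react-dom": "^17.0.2"
  },
  "cordova": {
    "plugins": {
      "com.virtuoworks.cordova-plugin-canvascamera": {},
      "cordova-plugin-file": {},
      "cordova-plugin-vibration": {}
    }
  }
}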

The versions might differ depending on when you install.

Installing on Monaca Cloud IDE

Click on the blue plus button and select New Terminal. You can type the above commands into the terminal.

Installing on Monaca Cloud IDE
Installing on Monaca Cloud IDE

Now we need to import the face-api models, which are pre-trained for recognizing face characteristics (age, gender, emotions, …). Download the models from here and put them in a new directory: “./public/models”.
To make sure that the models are copied into the output directory after the build, add publicDir: ’./public’ to the exported object in “./vite.config.js”.

Vite Configuration
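
A minimal sketch of the change; the rest of the template’s Vite config is omitted here, and publicDir is the only line we add:

// vite.config.js
import { defineConfig } from 'vite';

export default defineConfig({
  // Copy everything in ./public (including the face-api models)
  // into the output directory on build.
  publicDir: './public',
});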

Go to “./js” and create a file named “hybridFunctions.js”. Start with three helper functions to detect the type of device:

Helper Functions
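
A minimal sketch of these helpers, based on user-agent checks (the exact implementation may differ from the original):

// js/hybridFunctions.js
export const isAndroid = () => /Android/i.test(window.navigator.userAgent);
export const isIos = () => /iPhone|iPad|iPod/i.test(window.navigator.userAgent);
export const isMobile = () => isAndroid() || isIos();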

Pages

The app will be divided into two pages: the Home page and the Settings page.

Home Page

The main page of the app. This is where the recognition will take place. It will work both in the browser and on a mobile phone.

Home Page

Here is the code to render the layout:

home.jsx — layout
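
A simplified sketch of the layout: a camera preview, a block for the detected emotion, and a start/stop button. The component structure and names are assumptions; the original markup may differ:

import React, { useRef, useState } from 'react';
import { Page, Navbar, Block, Button } from 'framework7-react';

const HomePage = () => {
  const videoRef = useRef(null);                  // camera preview element
  const [emotion, setEmotion] = useState('none'); // set by the prediction code
  const [running, setRunning] = useState(false);

  // Placeholder: the full "start" function is shown in the prediction section.
  const start = () => setRunning(!running);

  return (
    <Page name="home">
      <Navbar title="Emotions Detection" />
      <Block>
        <video ref={videoRef} autoPlay muted playsInline />
      </Block>
      <Block strong>Detected emotion: {emotion}</Block>
      <Block>
        <Button fill onClick={start}>
          {running ? 'Stop' : 'Start'}
        </Button>
      </Block>
    </Page>
  );
};

export default HomePage;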

The set-up and prediction code is common to both the browser and the phone. Here is the set-up:

Home Page’s Set-up
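
A sketch of the set-up, assuming the face-api.js tiny face detector and expression models downloaded earlier; “startPhoneCamera” and “startBrowserCamera” are hypothetical names for the functions shown in the device-specific sections below:

import * as faceapi from 'face-api.js';
import { isMobile } from '../js/hybridFunctions';

// Load the pre-trained models that we copied into ./public/models.
const loadModel = async () => {
  await faceapi.nets.tinyFaceDetector.loadFromUri('./models');
  await faceapi.nets.faceExpressionNet.loadFromUri('./models');
};

// Start the camera differently depending on the device.
const startVideo = () => {
  if (isMobile()) {
    startPhoneCamera();   // Canvas Camera plugin (see "Phone functions")
  } else {
    startBrowserCamera(); // getUserMedia (see "Browser functions")
  }
};

// Entry point: load the models first, then start the camera.
const init = async () => {
  await loadModel();
  startVideo();
};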

The init function runs first and calls the loadModel function, which, as its name suggests, loads the models. The startVideo function then starts the camera stream differently depending on the device (mobile or not).

And here is the prediction code:

Home Page’s Prediction
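
A sketch of the prediction loop with face-api.js, following the description below; “setEmotionResult”, “setErrorCount”, “intervalRef” and the other component state names are assumptions:

// Run face detection + expression recognition on the current frame.
const predicting = async (input) => {
  const detection = await faceapi
    .detectSingleFace(input, new faceapi.TinyFaceDetectorOptions())
    .withFaceExpressions();
  if (!detection) throw new Error('No face detected');
  return detection.expressions; // e.g. { happy: 0.93, angry: 0.01, ... }
};

// Keep only the emotion with the highest confidence score.
const getDetectedEmotion = (expressions) =>
  Object.entries(expressions).reduce((best, cur) => (cur[1] > best[1] ? cur : best));

const predictEmotion = async (input) => {
  try {
    const [emotion, score] = getDetectedEmotion(await predicting(input));
    setEmotionResult(emotion, score);    // display the result on the page
  } catch (e) {
    setErrorCount((count) => count + 1); // the useEffect hook stops prediction
  }
};

// Toggle the camera and the button caption, and run the prediction
// on an interval (every 5 seconds by default).
const start = () => {
  if (!running) {
    startVideo();
    intervalRef.current = setInterval(() => predictEmotion(videoRef.current), 5000);
  } else {
    clearInterval(intervalRef.current);
  }
  setRunning(!running);
};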

The “start” function toggles the camera and the button caption. Once the camera is capturing, the “predictEmotion” function runs on an interval (every 5 seconds by default). It invokes the “predicting” function, which uses the face-api module to predict the emotion from the camera stream. The library returns several detected emotions (happy, angry, surprised, …), each with a confidence value (probability score). We then filter them and return the emotion with the highest score in the “getDetectedEmotion” function. Finally, we display the result on the page with the “setEmotionResult” function. If there is a prediction error, or no face could be detected, we increase the error count and stop the prediction in the “useEffect” hook defined earlier.

Phone functions

To make the camera recognise emotions on the phone, we will be using the Cordova Canvas Camera plugin. A workaround must be implemented to get the stream of pictures. After the Canvas Camera is started, the stream is sent to the “readImageFile” function, which reads the captured frames from the phone’s storage, where they are temporarily saved, so that we can access them later in the code.

Home Page’s Phone Functions
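
A rough sketch of the workaround, assuming the Canvas Camera success callback delivers the file path of each captured frame; the exact payload shape and option names are assumptions based on the plugin’s documentation:

const startPhoneCamera = () => {
  const options = {
    cameraFacing: 'front', // from the settings page
    fps: 30,
  };
  // The plugin saves frames to temporary files and reports their paths.
  window.plugin.CanvasCamera.start(options, console.error, (data) => {
    readImageFile(data.output.images.fullsize.file);
  });
};

// Read a temporarily saved frame from storage (cordova-plugin-file).
const readImageFile = (filePath) => {
  // Android and iOS expose different paths and protocols here.
  const url = isAndroid() ? 'file://' + filePath : filePath;
  window.resolveLocalFileSystemURL(url, (fileEntry) => {
    fileEntry.file((file) => {
      const reader = new FileReader();
      reader.onloadend = () => {
        const img = new Image();
        img.onload = () => predictEmotion(img); // feed the frame to face-api
        img.src = reader.result;                // data URL of the frame
      };
      reader.readAsDataURL(file);
    });
  }, console.error);
};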

Please note that Android and iOS use different file paths and protocols.

Browser functions

For the browser, we will be using navigator.mediaDevices.getUserMedia.

Home Page’s Browser Functions
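
A minimal sketch using the standard getUserMedia API to show the camera stream in the video element from the layout:

const startBrowserCamera = async () => {
  try {
    const stream = await navigator.mediaDevices.getUserMedia({
      video: { facingMode: 'user' },
      audio: false,
    });
    videoRef.current.srcObject = stream; // show the live stream in <video>
  } catch (err) {
    console.error('Could not access the camera:', err);
  }
};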

Settings Page

This page will allow the user to customize the time interval between camera predictions, allow vibrations, set the camera FPS, and switch between the front and back camera. The settings are saved in localStorage when the user clicks the Save button, and they are then retrieved on the Home page.

Settings Page
saveSettings function
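
A sketch of the function; the setting keys and default values are assumptions matching the options described above:

const saveSettings = () => {
  const settings = {
    interval: 5,           // seconds between predictions
    vibration: true,       // allow vibrations
    fps: 30,               // camera FPS
    cameraFacing: 'front', // front or back camera
  };
  localStorage.setItem('settings', JSON.stringify(settings));
};

// On the Home page, the settings are read back with:
// const settings = JSON.parse(localStorage.getItem('settings'));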

To style the app, feel free to copy styles from “./css/app.css”. After those steps, the app should be ready for testing.

Run the application

Browser

If you were developing locally, you can run “npm run dev” to preview the app in the browser. The URL should be http://localhost:8080.

Mobile

To access it on a mobile phone, we first need to deploy it with Monaca.

If you were developing locally, run the “monaca upload” command to upload it to Monaca Cloud IDE and open the project.

From here on, the process looks different for Android and iOS.

Android
You will need to build the application and install it on your phone to test it. You will use Monaca Cloud to build the application and the Monaca Debugger for testing and debugging.
After opening the project in the Monaca Cloud IDE, go to Build -> Build App for Android -> Build for Debugging, click on Custom Build Debugger and then Start Build. After this process finishes, you can download the app, install it on your phone, and test it.

Android — Custom Build Debugger

iOS

The steps are the same for iOS, but Apple requires you to have a developer certificate.

iOS — Custom Build Debugger

You can import this app right away into your Monaca projects with this link. You will need a Pro Plan account for this project, as it uses custom Cordova plugins.

Conclusion

In this article, we learned how to create an AI-powered emotions detection application using Monaca, React and Framework7, as well as TensorFlow.js. You can find the code here.

I hope every step was easy to follow. Please contact me if you have any questions or comments.

Good luck and enjoy coding :)
