Create an AI-powered emotion detection app with Monaca, React, and Framework7
In this article, we will learn how to create an AI-powered emotion detection hybrid application using Monaca, React, Framework7, and TensorFlow.js.
Before we start creating the project, it is good to understand what technologies we will be using in this tutorial.
Monaca is a collection of software tools and services for building and deploying HTML5 mobile hybrid apps. It is built on Apache Cordova and offers resources such as a cloud IDE, local development tools, a debugger, remote builds, and deployment support.
This is what we need to have to follow this tutorial:
- Monaca account
- Node.js (only if you want to code locally)
Are you ready? Let’s start!
Creating the project
First, we need to create the project in Monaca. This can be done either locally or in the Cloud IDE. We will use the Framework7 v6 React Blank template to create the project.
If you don’t have the Monaca CLI installed, run:
npm install -g monaca
Before starting, we have to log in using the following command:
monaca login
After logging in, we can create the project by running:
monaca create emotions_detection_app
After running it (or, in the Cloud IDE, after clicking on Create New Project), we will choose the following options:
Project Type: Framework Templates
Framework: Onsen UI and React
Template: Framework7 React Blank
First, we need to install one dependency and three Cordova plugins.
Then add these three plugins to “package.json”.
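As a concrete example, the commands below install face-api.js together with three plugins matching the features used later in this tutorial (a camera-to-canvas stream, file access for the temporary images, and vibration). The exact plugin package names are assumptions; check the plugin registry for the names and versions your project needs.

```shell
# The dependency: face-api.js (built on top of TensorFlow.js)
npm install face-api.js

# Cordova plugins — names are assumptions based on the features used later
npm install com.virtuoworks.cordova-plugin-canvascamera
npm install cordova-plugin-file
npm install cordova-plugin-vibration
```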
Another way of installing the Cordova plugins is by doing it in the Monaca Cloud IDE.
After installing the dependency and the plugins, the “package.json” should look something like the one below.
The versions might differ depending on when you install.
Installing on Monaca Cloud IDE
Click on the blue plus button and select New Terminal. You can type the above commands into the terminal.
Now we need to import the face-api models, which are pre-trained to recognize face characteristics (age, gender, …). Download the models from here and put them in a new directory: “./public/models”.
To make sure that the models are copied into the output directory at build time, add publicDir: './public' to the exported object in “./vite.config.js”.
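A minimal sketch of that change (the rest of the template’s options stay as they are):

```javascript
// ./vite.config.js — only publicDir is added; everything else is the template's.
import { defineConfig } from 'vite';

export default defineConfig({
  publicDir: './public', // copy ./public (including /models) into the build output
  // ...the template's existing plugins/server/build options remain unchanged
});
```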
Go to “./js” and create a file named “hybridFunctions.js”. Start with three helper functions that detect the type of device:
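A hypothetical sketch of what “hybridFunctions.js” could look like; the function names are assumptions. Each helper takes the user-agent string as a parameter (defaulting to the browser’s), so the app can branch between the browser implementation and the Cordova one:

```javascript
// ./js/hybridFunctions.js — hypothetical sketch; names are assumptions.
// Detect the platform from the user-agent string (export these in the real file).
const isAndroid = (ua = navigator.userAgent) => /android/i.test(ua);
const isIos = (ua = navigator.userAgent) => /iphone|ipad|ipod/i.test(ua);
const isMobile = (ua = navigator.userAgent) => isAndroid(ua) || isIos(ua);
```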
The app will be divided into these pages:
The main page of the app. This is where the recognition will take place. It will work both in the browser and on a mobile phone.
Here is the code to render the layout:
The setup and prediction code is shared by both. Here is the setup:
The init function runs first and calls the loadModel function, which, as its name suggests, loads the model. The startVideo function then starts the prediction differently depending on the device (mobile or not).
And here is the prediction:
On line 3, the “start” function toggles the camera and the button caption. The “predictEmotion” function, on line 17, starts once the camera is capturing and runs at the configured interval (every 5 seconds). It invokes the “predicting” function, in which we use the “faceApi” module to predict the emotion from the camera stream. The library returns several detected emotions (happy, angry, surprised, …), each with a confidence value (probability score). We then filter them and return the emotion with the highest score in the “getDetectedEmotion” function. Finally, we display the result on the page in the “setEmotionResult” function. If there is a prediction error or no face could be detected, we increase the error count and stop the prediction in the “useEffect” hook defined earlier.
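To illustrate the filtering step, here is a hypothetical sketch of “getDetectedEmotion”: face-api.js reports expressions as a map of emotion names to probability scores, and we keep the highest-scoring one, treating low-confidence results as no detection. The threshold value is an assumption:

```javascript
// Hypothetical sketch — picks the most probable emotion from a face-api.js
// expressions object, e.g. { happy: 0.91, angry: 0.02, surprised: 0.07 }.
const getDetectedEmotion = (expressions, threshold = 0.5) => {
  const [emotion, score] = Object.entries(expressions)
    .reduce((best, current) => (current[1] > best[1] ? current : best));
  // Below the threshold we return null, which counts as a failed prediction
  return score >= threshold ? emotion : null;
};
```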
To make the camera recognize emotions on the phone, we will be using the Cordova Canvas Camera plugin. A workaround must be implemented to get the stream of pictures. After the Canvas Camera is started, the stream is sent to the “readImageFile” function, which reads the phone storage where the images are being temporarily saved, so that we can access them later in the code.
Please note that Android and iOS have different file paths and protocols.
For the browser, we will use navigator.mediaDevices.getUserMedia.
This page will allow the user to customize the time interval between camera predictions, allow vibrations, set the FPS of the camera, and switch between the front and back camera. The settings are saved in localStorage after the user clicks the Save button and are then retrieved on the Home page.
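The save/load round-trip can be sketched as below; the storage key, the individual settings, and their defaults are assumptions (the storage argument defaults to the browser’s localStorage but is injectable):

```javascript
// Hypothetical sketch of the Settings page persistence; keys/defaults assumed.
const DEFAULT_SETTINGS = { interval: 5, vibration: true, fps: 30, camera: 'front' };

const saveSettings = (settings, storage = localStorage) => {
  storage.setItem('settings', JSON.stringify(settings));
};

const loadSettings = (storage = localStorage) => {
  const raw = storage.getItem('settings');
  // Merge over the defaults so missing keys still get a sensible value
  return raw ? { ...DEFAULT_SETTINGS, ...JSON.parse(raw) } : { ...DEFAULT_SETTINGS };
};
```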
To style the app, feel free to copy styles from “./css/app.css”. After those steps, the app should be ready for testing.
Run the application
If you were developing locally, you can run “npm run dev” to preview the app in the browser. The URL should be http://localhost:8080.
To access it on a mobile phone, we first need to deploy it with Monaca.
If you were developing locally, run the “monaca upload” command to upload it to Monaca Cloud IDE and open the project.
From this point on, the process differs for Android and iOS.
You will need to build the application and install it on the phone to test it. You will use Monaca Cloud to build the application and the Monaca Debugger for testing and debugging.
After opening the project in Monaca Cloud, go to Build -> Build App for Android -> Build for Debugging -> click on Custom Build Debugger and Start Build. After this process, you can download the app, install it, and test it on your phone.
The steps are the same for iOS, but Apple requires you to have a certificate.
You can import this app right away into your Monaca project with this link. You will need a “Pro Plan” account for this project, as it uses custom Cordova plugins.
In this article, we learned how to create an AI-powered emotion detection application using Monaca, React, and Framework7, as well as TensorFlow.js. You can find the code here.
I hope every step was easy to follow; please contact me if you have any questions or comments.
Good luck and enjoy coding :)