Selfie2Mood model for React Native (iOS) from scratch

Comprehensive tutorial on how to build mood prediction models to deploy on device

Steven Chand
doc.ai
5 min read · Apr 24, 2019


In this tutorial we will show how you can easily build a Selfie2Mood model that takes a picture of a face and predicts the mood. We will do this using TensorIO, which is a lightweight library we developed at doc.ai to help us rapidly deploy models to mobile devices.

The library is composed of two parts: a declarative component, which consumes a JSON description of a machine learning model and processes its inputs and outputs, and an imperative one, which calls into an underlying machine learning library to actually run the model. Getting your own models working on mobile phones is often as simple as describing their inputs and outputs in JSON and writing a few lines of JavaScript, which is exactly what we’re going to demonstrate in this post.
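
To give a sense of what that JSON description looks like, here is an illustrative sketch of the description file that ships inside a TensorIO model bundle. Every field name and value below is an assumption made for illustration, not the exact contents of the Mood model’s bundle, so consult the TensorIO documentation for the real schema. At runtime, TensorIO uses a description like this to prepare the input image and to map the model’s raw output back to named values.

{
  "name": "Mood Model",
  "details": "Predicts mood from a selfie",
  "model": {
    "file": "model.tflite"
  },
  "inputs": [
    {
      "name": "image",
      "type": "image",
      "shape": [224, 224, 3],
      "format": "RGB"
    }
  ],
  "outputs": [
    {
      "name": "mood",
      "type": "array",
      "shape": [1, 1]
    }
  ]
}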

Special thanks to Philip Dow for developing TensorIO and contributing to this tutorial.

A preview of what we will build!

Prerequisites

For your reference, the complete source code can be found here: https://github.com/doc-ai/Selfie2Mood

Getting Started

The following instructions will guide you step-by-step through the process of creating the Selfie2Mood App.

Let’s start by initializing a base React Native app:

react-native init Selfie2Mood

Then change into the Selfie2Mood directory:

cd Selfie2Mood

Next, let’s add react-native-tensorio to our App. This library allows us to call the native TensorIO library from React Native:

yarn add react-native-tensorio

Now, execute the following command to link the TensorIO library:

react-native link react-native-tensorio

Cool! Since the Mood model needs an image of a face as its input, we’ll use the iPhone camera to take a picture; we can then use that picture as the input to our Mood model.

Add react-native-camera to our App:

yarn add react-native-camera

Since we want to use the Camera on iOS, we have to prompt for the User’s permission. Therefore, we have to define the message that is displayed to the User when asking for Camera permission.

To do this, open the following file in your code editor:

ios/Selfie2Mood/Info.plist

In Info.plist, add the following key inside of the <dict> element, like so:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>NSCameraUsageDescription</key>
<string>Allow Camera access to predict your mood</string>

...
...
...
</dict>
</plist>

Cool, next we’ll install the native TensorIO library via CocoaPods. This pod is required to run our model. First, we’ll initialize the Podfile in the ios directory:

cd ios && pod init

Now, open Podfile in your code editor, and change the contents to look like this:

platform :ios, '10.0'

target 'Selfie2Mood' do
pod 'TensorIO'
pod 'TensorIO/TFLite'
end

Then, run the following command to actually install the pods and change back to our project root directory:

pod install && cd ..

Now open our workspace in Xcode:

open ios/Selfie2Mood.xcworkspace

In Xcode, right-click on the Libraries folder and select “Add Files to Selfie2Mood”, then select:

../node_modules/react-native-camera/ios/RNCamera.xcodeproj

This will add react-native-camera to our workspace. Now let’s link the library.

In Xcode, go to the General tab, find the Linked Frameworks and Libraries section, then click on the “+” button. In the pop-up dialog, select libRNCamera.a.

Next, let’s add our Mood model to Xcode. You can download it here. Once you’ve downloaded it, be sure to extract it. Once extracted, you will have a directory called happy-11.tfbundle.

Right-click on the Selfie2Mood base project and select “Add Files to Selfie2Mood”. Select the happy-11.tfbundle directory, and make sure to check the “Copy items if needed” checkbox.

The App Code… finally!

Now, we can add the React Native code for our app. Open App.js in your code editor, and change the contents to look like this:
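
The full App.js is available in the repository linked at the top of this post; the sketch below shows its general shape. The RNCamera pieces follow react-native-camera’s documented API, but everything involving RNTensorIO (the import, the load and run calls, the bundle name, and the image keys) is an assumption based on the react-native-tensorio README, so verify those details against the library and the repository.

import React, {Component} from 'react';
import {StyleSheet, Text, TouchableOpacity, View} from 'react-native';
import {RNCamera} from 'react-native-camera';
// NOTE: the RNTensorIO import style, method names, and image keys used below
// are assumptions taken from the react-native-tensorio README; check the
// library for the exact API.
import RNTensorIO from 'react-native-tensorio';

export default class App extends Component {
  state = {mood: ''};

  componentDidMount() {
    // Load the model bundle we added in Xcode (bundle name is an assumption).
    RNTensorIO.load('happy-11');
  }

  componentWillUnmount() {
    RNTensorIO.unload();
  }

  takePicture = async () => {
    if (!this.camera) {
      return;
    }
    // Capture a selfie; takePictureAsync resolves with the saved photo's file uri.
    const photo = await this.camera.takePictureAsync({quality: 0.5});
    RNTensorIO.run(
      {
        image: {
          [RNTensorIO.imageKeyData]: photo.uri,
          [RNTensorIO.imageKeyFormat]: RNTensorIO.imageTypeFile,
        },
      },
      (error, results) => {
        if (!error) {
          // The output name depends on the bundle's model.json; we simply
          // display the raw results here.
          this.setState({mood: JSON.stringify(results)});
        }
      },
    );
  };

  render() {
    return (
      <View style={styles.container}>
        <RNCamera
          ref={ref => {
            this.camera = ref;
          }}
          style={styles.camera}
          type={RNCamera.Constants.Type.front}
          captureAudio={false}
        />
        <Text style={styles.mood}>{this.state.mood}</Text>
        <TouchableOpacity style={styles.button} onPress={this.takePicture}>
          <Text>Take Picture</Text>
        </TouchableOpacity>
      </View>
    );
  }
}

const styles = StyleSheet.create({
  container: {flex: 1},
  camera: {flex: 1},
  mood: {textAlign: 'center', padding: 8},
  button: {alignSelf: 'center', padding: 16, marginBottom: 24},
});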

In a nutshell, we render a front-facing (selfie) Camera component and a “Take Picture” button. When the button is pressed, we use the camera to capture a picture. We then send that picture to TensorIO, which has been loaded with our Mood model. The Mood model processes the picture, and outputs a prediction of the User’s mood!

Finally! We’re ready to run the app on our device!

Connect your iPhone to your Mac. In Xcode, select it from the Device list, and click on the Run button!

And that’s it! You’re now ready to run your own machine learning models on mobile phones. Once you’ve trained a model it’s as simple as describing its inputs and outputs in JSON and then calling into it with a few lines of JavaScript.

With TensorIO and its React Native bridge you get machine learning at native speeds with a React Native interface. TensorIO is currently available for iOS and Android with a React Native bridge for iOS. The React Native bridge for Android is in development.

The library currently supports a TensorFlow Lite backend and is extensible to other machine learning frameworks, all using the same JSON and JavaScript interface.

TensorIO is 100% open source. Learn more about the library and see more examples at the TensorIO Homepage.

If you have questions, feedback, or want to work with us on cool AI-related problems, shoot us an email at info@doc.ai!
