Using our ReactNative CoreML image recognition component

Daryl Rowland
Published in Jigsaw XYZ
Dec 21, 2017 · 5 min read

This is part 2 of our two-part series on how to use our ReactNative realtime image recognition component.

If you’ve made it to the second part of this article, hopefully that means you have your own machine learning model (or are using one of Apple’s open source ones).

In this article, I’m going to explain how you can integrate that model into a new ReactNative app using our react-native-coreml-image component.

Create a new ReactNative project

First, we need to create a new ReactNative project. Open up a terminal window, browse to the folder where you want to create the project and then run the following command:

react-native init HotDogOrNotHotDog

After a few moments, your ReactNative project will be created. Browse to the folder it was created in, go to the ios sub-folder, and then double-click on the HotDogOrNotHotDog.xcodeproj file. This will open the project in Xcode.

Install the react-native-coreml-image component

Back in the terminal, make sure you are in the root of your project and run:
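For an npm package named react-native-coreml-image, that would typically be something like:

npm install --save react-native-coreml-image
react-native link react-native-coreml-image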

This will install our component and then link it with Xcode.

Set up Xcode to use Swift

By default, ReactNative projects are set up to predominantly use Objective-C. As our component is written in Swift, there are a couple of steps you need to run through before you can use the component.

First, switch back to Xcode, right-click on the root of your project and then choose New File…

In the window that appears, select Swift File, click Next, and give it a name (it can be anything, e.g. Temp, as we don’t actually use it). A message will then appear asking if you want to create an Objective-C bridging header.

This bridging header is needed to allow the Swift code in the component to interface with ReactNative’s Objective-C code, so click Create Bridging Header.

Next, we need to switch the Swift version to 4.0. To do this, click on the root of your project (HotDogOrNotHotDog), select the Build Settings tab, type Swift, and then find the Swift Language Version setting. Make sure it is set to Swift 4.0.

Importing the CoreML Model file

Now we need to import the CoreML model file that you created in part 1 of the article. To do this, find the file (probably called MyClassifier.mlmodel) and drag it to the root of your XCode project in XCode.

Then rename it to HotDogOrNotHotDog.mlmodelc (the .mlmodelc file extension is important, so make sure it doesn’t still have the .mlmodel ending!).

To ensure this model is bundled with our application, we then need to go back to the root of the project, click on the Build Phases tab and add the HotDogOrNotHotDog.mlmodelc file to the Copy Bundle Resources section.

Camera Permissions

Next, as our app will access the iPhone’s camera we need to go to Info.plist and add a key for NSCameraUsageDescription that describes why we are using the camera.
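For example, the Info.plist entry would look something like this (the description string can be whatever makes sense for your app):

<key>NSCameraUsageDescription</key>
<string>We need the camera to recognise hot dogs in real time</string>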

OK, that’s all there is to the Xcode setup, so you can try running the project now with Xcode’s run button.

Assuming everything loads OK, it’s time to start writing our ReactNative code.

Use the component in ReactNative

Open up the ReactNative code root in your favourite editor, and then open the App.js file.

The first thing to do is to import the image recognition component at the top of your file…
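Alongside the React and react-native imports that the template already gives you, add the component import. Whether it is a default or named export depends on the package, so adjust this sketch if needed:

import React, { Component } from 'react';
import { StyleSheet, Text, View } from 'react-native';

// Assumed to be the package's default export; check the component's README if not
import CoreMLImage from 'react-native-coreml-image';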

Next, clear out all of the boilerplate code ReactNative has inserted by default (the instructions, default styles, etc) and change the render method to the below…
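Here’s a sketch of what that can look like. Only the CoreMLImage element and its onClassification prop come from the component as used in this article; the surrounding layout, the style names (container, classificationTextContainer, classificationText) and the 'hotdog' label are placeholders you can adapt:

render() {
  // Decide what text to show based on the best match stored in state.
  // 'hotdog' is whatever label you used when training your model in part 1.
  let classificationText = '';
  if (this.state.bestMatch && this.state.bestMatch.identifier) {
    if (this.state.bestMatch.identifier === 'hotdog') {
      classificationText = 'Hot Dog';
    } else {
      classificationText = 'Not Hot Dog';
    }
  }

  return (
    <CoreMLImage onClassification={(evt) => this.onClassification(evt)}>
      <View style={styles.container}>
        <View style={styles.classificationTextContainer}>
          <Text style={styles.classificationText}>{classificationText}</Text>
        </View>
      </View>
    </CoreMLImage>
  );
}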

In this code, we render our CoreMLImage component along with an inner container that displays the recognised image type if there is one.

The code at the top of the render method looks at whether or not we have a best match classification. If we do, it formats this as either Hot Dog or Not Hot Dog.

On the CoreMLImage component we also set the onClassification event handler. This is called every time the component gets new classification scores for the current camera view. The data that the event handler receives will look something like this:
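(The field names here follow the standard CoreML/Vision classification results, i.e. an identifier and a confidence per class; the exact shape may vary slightly between component versions.)

{
  "classifications": [
    { "identifier": "hotdog", "confidence": 0.93 },
    { "identifier": "nothotdog", "confidence": 0.07 }
  ]
}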

Our onClassification method then looks to see whether we have a classification with a confidence greater than 0.5 (you can tweak this through BEST_MATCH_THRESHOLD). If we do have something, we update the state with the best matching classification object.
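A sketch of that handler, assuming the event shape shown above:

onClassification(evt) {
  let bestMatch = null;

  if (evt && evt.classifications) {
    evt.classifications.forEach((classification) => {
      // Only consider classifications above our confidence threshold,
      // and keep the highest scoring one
      if (classification.confidence > BEST_MATCH_THRESHOLD) {
        if (!bestMatch || classification.confidence > bestMatch.confidence) {
          bestMatch = classification;
        }
      }
    });
  }

  // If nothing was confident enough, this clears the previous match
  this.setState({ bestMatch: bestMatch });
}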

And then to complete our App.js file we need to add a constructor:
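Something like this is enough; it just initialises the state with an empty best match:

constructor(props) {
  super(props);

  this.state = {
    bestMatch: null
  };
}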

The styles for the view:
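These are plain React Native styles. The names below match the placeholders used in the render sketch above, and the actual values are just a reasonable starting point:

const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: 'flex-end',
    alignItems: 'center'
  },
  classificationTextContainer: {
    marginBottom: 40,
    padding: 10,
    borderRadius: 5,
    backgroundColor: 'rgba(0, 0, 0, 0.5)'
  },
  classificationText: {
    color: '#ffffff',
    fontSize: 24,
    textAlign: 'center'
  }
});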

And, finally, near the top of the file, our BEST_MATCH_THRESHOLD constant:
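This is just the 0.5 confidence cut-off used in onClassification:

const BEST_MATCH_THRESHOLD = 0.5;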

The code in full

Here’s the full App.js file you should have ended up with:

Run the app

Now it’s time to run the app. Click the run button in Xcode, and make sure you run on a real iPhone (note: the camera won’t work on the simulator).

If everything works, you’ll be asked for your permission to access the camera, and then you’ll see a live view. Now you can look for hot dogs/not hot dogs!

Full source code for this example is available here:
