Creating native frame processors for Vision Camera in React Native using OpenCV

Lukasz Kurant · Published in dogtronic · 10 min read · Aug 17, 2022

Introduction

The change of React Native's architecture to Fabric, and with it the move to synchronous communication between the native side and JS, has made it possible to create libraries with performance that was previously unattainable. One of the libraries enabling such communication is react-native-reanimated, which I will use together with react-native-vision-camera to build an application that detects elements from the device's camera in real time using the OpenCV library.

OpenCV is a cross-platform, open-source image processing library developed in C++, with ports for other languages such as Java, JavaScript, and Python. Currently, however, there are no solid React Native libraries that would make it easy to use OpenCV functionality directly from JS code. What helps is the ability to write native code and set up communication between the native thread and the JS thread. The standard approach, described in many posts, is to create a bridge over which communication happens asynchronously. However, this approach is often not efficient enough for real-time detection or transformations.

Details about the OpenCV Library can be found here: https://opencv.org. Information about react-native-vision-camera and native frame processors can be found here: https://mrousavy.com/react-native-vision-camera/docs/guides/frame-processors

Important note: this post was written using OpenCV 4.6.0 and React Native 0.68.2. For other versions, the steps for importing OpenCV into the project in particular may differ.

Creation of the project

The first step will be to create a new application using the command:

npx react-native init opencvframeprocessor

After installing the necessary pods and creating directories, we proceed to import OpenCV into our project, separately for iOS and Android.

Importing OpenCV for iOS

The first step would be to go to https://opencv.org/releases/ and download the OpenCV version for iOS. In my case, it is version 4.6.0.

OpenCV website — iOS Pack

After downloading the library, we open our project in Xcode (remember to open the file with the .xcworkspace extension, not the .xcodeproj). To import the library, we drag the downloaded opencv2.framework directory into the main project (the left panel of the window).

Importing OpenCV in Xcode

Then check the “Copy items if needed” option and click Finish. The library should appear in the panel on the left side of the window.

The next step will be to attach the required frameworks to the project. We can do this in the project settings under Build Phases -> Link Binary With Libraries. The following frameworks should be added to the project:

  • QuartzCore.framework
  • CoreVideo.framework
  • CoreImage.framework
  • AssetsLibrary.framework
  • CoreFoundation.framework
  • CoreGraphics.framework
  • CoreMedia.framework
  • Accelerate.framework
Required libraries in Xcode

The next step will be to create OpenCV support files in the project. We create the files in the root directory of the project (where the AppDelegate.h and AppDelegate.m files are located).

First, we create a new header file and call it OpenCV.h.

Creating a header file

In it, we declare a new class and an example method to retrieve the OpenCV version. To do this we use the following code.
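A minimal sketch of what the header can look like (the method name is illustrative):

// OpenCV.h
#import <Foundation/Foundation.h>

@interface OpenCV : NSObject

+ (NSString *)getOpenCVVersion;

@end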

Next, we create a new Objective-C file named OpenCV.m.

Creating Objective-C file

Since the OpenCV library is written in C++, it will be necessary to change the format of the created file to .mm (in other words, Objective-C++). We can do this in the right panel, by changing the extension in the file name.

Changing the file format

Then, in the created file, we create code that implements the class from the header file.
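A sketch of the implementation; note that the OpenCV header is imported before our own header so that it precedes any Apple headers:

// OpenCV.mm
#import <opencv2/opencv.hpp>
#import "OpenCV.h"

@implementation OpenCV

// Returns the version of the linked OpenCV library, e.g. "4.6.0".
+ (NSString *)getOpenCVVersion {
  return [NSString stringWithFormat:@"OpenCV Version %s", CV_VERSION];
}

@end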

The next step will be to create a PCH (prefix header) file, in which we import the OpenCV headers so that they are available wherever the Objective-C++ compiler is used. To do this, we add a new PCH file named PrefixHeader next to the previously created files.

Creating the PCH file

We give it the following content:
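For example (the __cplusplus guard ensures plain Objective-C files are unaffected):

// PrefixHeader.pch
#ifndef PrefixHeader_pch
#define PrefixHeader_pch

// Import OpenCV before any Apple headers to avoid macro conflicts,
// but only where the compiler is running in C++ mode.
#ifdef __cplusplus
#import <opencv2/opencv.hpp>
#endif

#endif /* PrefixHeader_pch */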

Next, in the project settings, we need to point the build to its location. In Build Settings -> Prefix Header, we add the entry ${PROJECT_DIR}/PrefixHeader.pch.

Location of the PCH file

After that, we check if the application builds — if it does, our library has been added correctly and we can move on to the next steps.

Importing OpenCV for Android

To download the OpenCV library for Android, we return to https://opencv.org/releases/, but this time we select the Android package.

OpenCV webpage — Android package

After downloading and unpacking the archive, we open our project in Android Studio. The first step to import our module will be to select File -> Import Module and point to the /sdk directory (note: not the sdk/java directory). We name the library, e.g. openCVLib, and leave the other options at their defaults.

Importing module in Android Studio
Selecting the sdk path

Next, we need to add support for the Kotlin language (the OpenCV 4.6.0 Android module contains Kotlin sources). In the build.gradle file, we add the following:
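A sketch for the project-level build.gradle; the Kotlin version below is an example:

buildscript {
    ext {
        kotlin_version = "1.6.10" // example version
    }
    dependencies {
        classpath("org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version")
    }
}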

We proceed to add the library as a dependency for the project. From the File menu, select Project Structure.

Project Structure option in File menu.

We go to the Dependencies tab, click the + icon, and select Module Dependency. In the next step, we select our OpenCV library and add it as a dependency.

Adding module dependency to app module.

The next step will be to add jniLibs files to our application. In the app/src/main directory, we create a jniLibs directory and copy there the contents of the sdk/native/libs directory from the previously downloaded archive.

jniLibs files location.

In the app/build.gradle file, we add the following line to fix an error when building the application.
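In my experience the error in question is a duplicate libc++_shared.so (both OpenCV and react-native-reanimated ship one); I assume a packagingOptions rule of the following form, inside the android block:

android {
    packagingOptions {
        // Pick a single copy of the duplicated native library.
        pickFirst '**/libc++_shared.so'
    }
}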

Next, we need to check if the library has been imported correctly. In the MainActivity.java file, we import the library package.
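For example:

// MainActivity.java
import org.opencv.android.OpenCVLoader;
import android.util.Log;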

And in the MainActivity class, we add a static initializer that loads the library:
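A sketch using OpenCVLoader; the log messages are illustrative:

static {
    if (OpenCVLoader.initDebug()) {
        Log.d("OpenCV", "OpenCV loaded successfully");
    } else {
        Log.e("OpenCV", "OpenCV failed to load");
    }
}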

After building and starting the application, the logs should show a message that OpenCV is loaded.

Installation of required libraries

As I mentioned earlier, the next step will be to add the Vision Camera and Reanimated libraries to our project. To do this, we run the commands in the root directory of the React Native project:

yarn add react-native-vision-camera react-native-reanimated

npx pod-install

To properly add the library and add the necessary permissions, go through the installation process described here: https://mrousavy.com/react-native-vision-camera/docs/guides.

Frame Processor for iOS

In order to create a new frame processor for the Vision Camera library, we need to create a file that will contain its logic. Before we do that, however, we need to extend our OpenCV.mm file with functions for object detection; in our case, it will be the detection of a blue square. By default, the frame processor receives a frame from the camera as an object of type CMSampleBufferRef, so it will be necessary to prepare a function that converts it to a standard iOS image of type UIImage. We can do this with the following function (let's add it in the OpenCV class in the OpenCV.mm file):
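A sketch based on Apple's standard conversion snippet; it assumes the camera delivers BGRA frames:

+ (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer {
  CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
  CVPixelBufferLockBaseAddress(imageBuffer, 0);

  void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
  size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
  size_t width = CVPixelBufferGetWidth(imageBuffer);
  size_t height = CVPixelBufferGetHeight(imageBuffer);

  // Draw the pixel buffer into a CoreGraphics bitmap context.
  CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
  CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
      bytesPerRow, colorSpace,
      kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
  CGImageRef quartzImage = CGBitmapContextCreateImage(context);

  CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
  CGContextRelease(context);
  CGColorSpaceRelease(colorSpace);

  UIImage *image = [UIImage imageWithCGImage:quartzImage];
  CGImageRelease(quartzImage);
  return image;
}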

The OpenCV library performs operations on so-called matrices. Hence, we will need a function that will allow us to convert a UIImage to a Mat object. We can do it, for example, in the following way:
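A sketch following the helper from the OpenCV documentation; here it is a static function private to OpenCV.mm, since cv::Mat is a C++ type that we don't want to expose in the Objective-C header:

// Converts a UIImage to an RGBA cv::Mat.
static cv::Mat matFromUIImage(UIImage *image) {
  CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
  CGFloat cols = image.size.width;
  CGFloat rows = image.size.height;

  cv::Mat mat((int)rows, (int)cols, CV_8UC4); // 8 bits per channel, 4 channels

  CGContextRef contextRef = CGBitmapContextCreate(
      mat.data, cols, rows, 8, mat.step[0], colorSpace,
      kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault);
  CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
  CGContextRelease(contextRef);

  return mat;
}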

Next, let’s add a function in which the detection of blue objects will be implemented.

Our detection will proceed as follows:

  • We will convert the image, which is saved in RGB format by default, to BGR and then to HSV.
  • Based on the range, we will cut only the color we are interested in (it will be blue).
  • We will detect the contours of the blue elements.
  • The first one larger than the specified value will be our detected element, so we will return its position and size.

So, to begin with, we specify our value ranges: for the blue color these can be, for example, hues of roughly 90-130 on OpenCV's 0-179 hue scale, with minimum saturation and value thresholds that reject washed-out pixels. Next, we perform the necessary color conversions and detect the contours, returning the first sufficiently large one in the form of an NSDictionary object (so that it can be picked up on the JS side). In the absence of such an element in the frame, we return an empty object.
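Putting these steps together, the detection function can look as follows (a sketch: the HSV thresholds and the minimum contour area of 100 px are illustrative values):

+ (NSDictionary *)findObject:(UIImage *)image {
  cv::Mat mat = matFromUIImage(image);

  // 1. Convert from RGBA to BGR, then to HSV.
  cv::Mat bgr, hsv;
  cv::cvtColor(mat, bgr, cv::COLOR_RGBA2BGR);
  cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);

  // 2. Keep only the pixels that fall within the blue range.
  cv::Mat mask;
  cv::inRange(hsv, cv::Scalar(90, 80, 50), cv::Scalar(130, 255, 255), mask);

  // 3. Detect the contours of the blue areas.
  std::vector<std::vector<cv::Point>> contours;
  cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

  // 4. Return the position and size of the first sufficiently large one.
  for (const auto &contour : contours) {
    if (cv::contourArea(contour) > 100) {
      cv::Rect rect = cv::boundingRect(contour);
      return @{
        @"x" : @(rect.x),
        @"y" : @(rect.y),
        @"width" : @(rect.width),
        @"height" : @(rect.height),
      };
    }
  }

  // No blue element found in this frame.
  return @{};
}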

We also need to declare the new functions in the header file, i.e. OpenCV.h. After the changes, it will look as follows.
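A sketch; only the Objective-C-friendly methods are declared here, while the cv::Mat helper stays private to the .mm file:

// OpenCV.h
#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h>
#import <CoreMedia/CoreMedia.h>

@interface OpenCV : NSObject

+ (NSString *)getOpenCVVersion;
+ (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer;
+ (NSDictionary *)findObject:(UIImage *)image;

@end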

Next, we need to create a new file with our Frame processor. Let’s call it ObjectDetectFrameProcessor.mm and add the following code to it.
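A sketch following the Vision Camera v2 plugin pattern; the VISION_EXPORT_FRAME_PROCESSOR macro registers the function under the name __objectDetect on the JS side:

// ObjectDetectFrameProcessor.mm
#import <VisionCamera/FrameProcessorPlugin.h>
#import <VisionCamera/Frame.h>
#import "OpenCV.h"

@interface ObjectDetectFrameProcessorPlugin : NSObject
@end

@implementation ObjectDetectFrameProcessorPlugin

// Called for every frame; converts the buffer and runs the detection.
static inline id objectDetect(Frame *frame, NSArray *arguments) {
  UIImage *image = [OpenCV imageFromSampleBuffer:frame.buffer];
  return [OpenCV findObject:image];
}

VISION_EXPORT_FRAME_PROCESSOR(objectDetect)

@end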

Code added this way can already be used from JS. But first, let's add similar functionality for Android as well.

Frame processor for Android

The default frame format returned by the Vision Camera library in a frame processor on Android is ImageProxy. To add support for it, we need to add the following to the dependencies section of the app/build.gradle file:
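The version below is an example; any recent camera-core release should work:

dependencies {
    // ImageProxy lives in the androidx.camera package.
    implementation "androidx.camera:camera-core:1.1.0"
}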

Let’s add an OpenCV.java file that will contain the findObject function, which will be responsible for detecting blue objects. In addition, let’s add a helper method to convert an ImageProxy object to a Mat object.
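A sketch; the package name follows the project name, the YUV-to-RGB conversion assumes the common NV21-compatible plane layout, and the HSV thresholds and minimum contour area are illustrative:

// OpenCV.java
package com.opencvframeprocessor;

import androidx.camera.core.ImageProxy;
import com.facebook.react.bridge.WritableNativeMap;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

public class OpenCV {

  // Converts an ImageProxy (YUV_420_888) to an RGB Mat via NV21.
  public static Mat matFromImageProxy(ImageProxy image) {
    ByteBuffer yBuffer = image.getPlanes()[0].getBuffer();
    ByteBuffer uBuffer = image.getPlanes()[1].getBuffer();
    ByteBuffer vBuffer = image.getPlanes()[2].getBuffer();

    int ySize = yBuffer.remaining();
    int uSize = uBuffer.remaining();
    int vSize = vBuffer.remaining();

    byte[] nv21 = new byte[ySize + uSize + vSize];
    yBuffer.get(nv21, 0, ySize);
    vBuffer.get(nv21, ySize, vSize);
    uBuffer.get(nv21, ySize + vSize, uSize);

    Mat yuv = new Mat(image.getHeight() + image.getHeight() / 2,
        image.getWidth(), CvType.CV_8UC1);
    yuv.put(0, 0, nv21);

    Mat rgb = new Mat();
    Imgproc.cvtColor(yuv, rgb, Imgproc.COLOR_YUV2RGB_NV21);
    return rgb;
  }

  // Detects the first sufficiently large blue element in the frame.
  public static WritableNativeMap findObject(ImageProxy frame) {
    Mat rgb = matFromImageProxy(frame);

    Mat hsv = new Mat();
    Imgproc.cvtColor(rgb, hsv, Imgproc.COLOR_RGB2HSV);

    Mat mask = new Mat();
    Core.inRange(hsv, new Scalar(90, 80, 50), new Scalar(130, 255, 255), mask);

    List<MatOfPoint> contours = new ArrayList<>();
    Imgproc.findContours(mask, contours, new Mat(),
        Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

    WritableNativeMap map = new WritableNativeMap();
    for (MatOfPoint contour : contours) {
      if (Imgproc.contourArea(contour) > 100) {
        Rect rect = Imgproc.boundingRect(contour);
        map.putInt("x", rect.x);
        map.putInt("y", rect.y);
        map.putInt("width", rect.width);
        map.putInt("height", rect.height);
        break;
      }
    }
    return map; // empty map when nothing was detected
  }
}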

To add Frame Processor, we need to create ObjectDetectFrameProcessorPlugin.java file with the following content:
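A sketch following the Vision Camera v2 Android plugin pattern; the name passed to super() becomes __objectDetect on the JS side:

// ObjectDetectFrameProcessorPlugin.java
package com.opencvframeprocessor;

import androidx.camera.core.ImageProxy;
import com.mrousavy.camera.frameprocessor.FrameProcessorPlugin;

public class ObjectDetectFrameProcessorPlugin extends FrameProcessorPlugin {

  @Override
  public Object callback(ImageProxy image, Object[] params) {
    return OpenCV.findObject(image);
  }

  ObjectDetectFrameProcessorPlugin() {
    super("objectDetect");
  }
}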

And ObjectDetectFrameProcessorPluginModule.java:
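One way to implement it (an assumption on my part, following the registration pattern from the Vision Camera docs) is as a ReactPackage that registers the plugin:

// ObjectDetectFrameProcessorPluginModule.java
package com.opencvframeprocessor;

import androidx.annotation.NonNull;
import com.facebook.react.ReactPackage;
import com.facebook.react.bridge.NativeModule;
import com.facebook.react.bridge.ReactApplicationContext;
import com.facebook.react.uimanager.ViewManager;
import com.mrousavy.camera.frameprocessor.FrameProcessorPlugin;
import java.util.Collections;
import java.util.List;

public class ObjectDetectFrameProcessorPluginModule implements ReactPackage {

  @NonNull
  @Override
  public List<NativeModule> createNativeModules(@NonNull ReactApplicationContext reactContext) {
    // Register the frame processor plugin when the package is created.
    FrameProcessorPlugin.register(new ObjectDetectFrameProcessorPlugin());
    return Collections.emptyList();
  }

  @NonNull
  @Override
  public List<ViewManager> createViewManagers(@NonNull ReactApplicationContext reactContext) {
    return Collections.emptyList();
  }
}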

Then the module must be registered. In the MainApplication.java file under the line:
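That is, presumably the standard line generated by React Native in getPackages():

List<ReactPackage> packages = new PackageList(this).getPackages();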

We add a new entry with our module.
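For example:

packages.add(new ObjectDetectFrameProcessorPluginModule());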

The module prepared this way is now ready to be used in JS code.

Use of Frame processors on the JS side

To be able to use the frame processor on the application side, we need to make the react-native-reanimated plugin aware of it. To do this, we add the appropriate entry in the babel.config.js file (located in the application's root directory).
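Following the Vision Camera documentation, the entry can look like this:

// babel.config.js
module.exports = {
  presets: ['module:metro-react-native-babel-preset'],
  plugins: [
    [
      'react-native-reanimated/plugin',
      {
        globals: ['__objectDetect'],
      },
    ],
  ],
};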

The name __objectDetect is not accidental: it is the name we gave the plugin in the native code of our processors, prefixed with the characters "__".

Let’s move on to the App.js file. First we need to declare our function responsible for calling the native code.
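The wrapper is a worklet that calls the global function registered by our native plugins:

// App.js
export function objectDetect(frame) {
  'worklet';
  return __objectDetect(frame);
}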

In the App component, we then add our code. First, let’s start by declaring a place to store the parameters of the detected square.
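A sketch; the names of the shared values are mine:

// import { useSharedValue } from 'react-native-reanimated';
const rectX = useSharedValue(0);
const rectY = useSharedValue(0);
const rectWidth = useSharedValue(0);
const rectHeight = useSharedValue(0);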

By using the useSharedValue hook, we can pass position and square size values directly to the style using the useAnimatedStyle hook. Both come from the react-native-reanimated library.
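For example, a style for an absolutely positioned square that follows the detected object:

// import { useAnimatedStyle } from 'react-native-reanimated';
const rectStyle = useAnimatedStyle(() => ({
  position: 'absolute',
  left: rectX.value,
  top: rectY.value,
  width: rectWidth.value,
  height: rectHeight.value,
  borderColor: 'red', // illustrative styling
  borderWidth: 2,
}));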

It is also important to check the camera permissions; without them, we will not be able to start the camera.
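A sketch using the Vision Camera permission API:

// import { useEffect, useState } from 'react';
// import { Camera, useCameraDevices } from 'react-native-vision-camera';
const [hasPermission, setHasPermission] = useState(false);
const device = useCameraDevices().back;

useEffect(() => {
  (async () => {
    const status = await Camera.requestCameraPermission();
    setHasPermission(status === 'authorized');
  })();
}, []);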

Let's move on to the declaration of the frame processor. Once an object is detected, we need to convert its position and size from frame coordinates to the screen coordinates of the device (since the two resolutions differ). Because iOS reports the frame dimensions with width and height swapped relative to Android, we also have to handle the conversion per platform.
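A sketch; it assumes the shared values and the objectDetect wrapper declared earlier. SCREEN_WIDTH, SCREEN_HEIGHT, and IS_IOS live at module scope so the worklet captures them as plain values:

// import { Dimensions, Platform } from 'react-native';
// import { useFrameProcessor } from 'react-native-vision-camera';
const { width: SCREEN_WIDTH, height: SCREEN_HEIGHT } = Dimensions.get('window');
const IS_IOS = Platform.OS === 'ios';

// ...inside the App component:
const frameProcessor = useFrameProcessor((frame) => {
  'worklet';
  const detected = objectDetect(frame);
  if (detected.width == null) {
    return; // nothing detected in this frame
  }

  // iOS reports the frame size with width and height swapped
  // relative to Android, so pick the axes per platform.
  const frameWidth = IS_IOS ? frame.height : frame.width;
  const frameHeight = IS_IOS ? frame.width : frame.height;

  // Scale frame coordinates to screen coordinates.
  rectX.value = detected.x * (SCREEN_WIDTH / frameWidth);
  rectY.value = detected.y * (SCREEN_HEIGHT / frameHeight);
  rectWidth.value = detected.width * (SCREEN_WIDTH / frameWidth);
  rectHeight.value = detected.height * (SCREEN_HEIGHT / frameHeight);
}, []);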

Next, our component must return a <Camera /> component and an animated square.
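A sketch; frameProcessorFps limits how often the processor runs:

// import { StyleSheet, View } from 'react-native';
// import Animated from 'react-native-reanimated';
return (
  device != null &&
  hasPermission && (
    <View style={StyleSheet.absoluteFill}>
      <Camera
        style={StyleSheet.absoluteFill}
        device={device}
        isActive={true}
        frameProcessor={frameProcessor}
        frameProcessorFps={5}
      />
      {/* The animated square drawn over the detected object. */}
      <Animated.View style={rectStyle} />
    </View>
  )
);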

The pieces above make up the entire file; the complete version can be found in the repository linked at the end of this post.

Results

Let’s check how our code works on both systems. Let’s start with iOS.

The result of running on iOS.

On Android, on the other hand, our application works as follows:

The result of running on Android.

Summary

The process of importing the OpenCV library and using it for real-time object detection is not an easy task. The multitude of versions and ways of using them often leads to problems that are difficult to solve. Nevertheless, the result is a sufficient reward for the journey. The main inconvenience of using OpenCV in React Native applications remains the need to write native code: Java (or Kotlin) on Android, and Objective-C/C++ (or Swift) on iOS.

You can find the full repository with the code here: https://github.com/dogtronic/blog-opencv-frame-processor

References

https://brainhub.eu/library/opencv-react-native-image-processing, a great post on importing OpenCV into React Native, though not up to date for newer versions of the library.

https://opencv.org, the home page of the OpenCV library, including documentation and usage examples.

https://mrousavy.com/react-native-vision-camera/docs/guides/frame-processors, documentation including examples of using frame processors.

You can find the Polish version of the article here: https://dogtronic.io/tworzenie-natywnych-procesorow-klatek/

Find us on our Dogtronic website.
