Object Detection in React Native

Seamless object detection using ML Kit and TensorFlow

Chaitanyaa Adaki
Simform Engineering
7 min read · Dec 11, 2023


Object detection can sound intimidating with all its technical aspects, but fear not! We’re about to learn how to make our React Native app understand and recognize different objects.

ML Kit: A Powerful Tool

ML Kit is a mobile SDK developed by Google that provides easy-to-use machine-learning capabilities for mobile applications. It supports tasks like object detection, text recognition, image labeling, face detection, barcode scanning, and more. It allows developers to integrate various machine-learning models into their Android and iOS apps without requiring an in-depth understanding of machine learning.

TensorFlow

TensorFlow is a Google-developed, open-source machine learning framework with a mobile and embedded device optimized version — TensorFlow Lite.

Designed to run efficiently on mobile devices, TensorFlow Lite is ideal for object detection tasks within mobile applications.

Preparation

Before we start, ensure your development environment is set up with:

  1. Node.js and npm installed on your development machine
  2. React Native development environment set up
  3. Android Studio and Xcode for Android and iOS development, respectively
  4. A basic understanding of React Native

Understanding Object Detection Models

We have two models to detect objects:

  • Custom Model
  • Base Model

Custom Model

Custom Model, as the name suggests, is built from the ground up or trained from scratch. This means that you collect your own dataset and train the model to recognize specific objects or patterns of interest. You have full control over the architecture and training process.

Custom Model Implementation

First, create a React Native project with the init command.

npx react-native init ObjectDetectionApp

Android Configuration Using a Custom Model

Create a file with the name CustomObjectDetectionModule or another suitable name.

In this module, we will create a class, CustomObjectDetectionModule, that will have a method for image processing and object detection.
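
As a starting point, here is a minimal sketch of what this module could look like (the package name com.objectdetectionapp is an assumption; adjust it to your project). The detection logic is filled in over the following steps.

package com.objectdetectionapp; // assumed package name, adjust to your project

import com.facebook.react.bridge.ReactApplicationContext;
import com.facebook.react.bridge.ReactContextBaseJavaModule;
import com.facebook.react.bridge.ReactMethod;

public class CustomObjectDetectionModule extends ReactContextBaseJavaModule {

    private final ReactApplicationContext reactContext;

    public CustomObjectDetectionModule(ReactApplicationContext reactContext) {
        super(reactContext);
        this.reactContext = reactContext;
    }

    @Override
    public String getName() {
        // Name used to access this module from JavaScript via NativeModules
        return "CustomObjectDetectionModule";
    }

    @ReactMethod
    public void startCustomObjectDetection(String imagePath) {
        // Image processing and object detection are added in the steps below
    }
}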

Step 1: Open the Android folder in Android Studio and add the ML Kit Android libraries to your module's app-level Gradle file, which is usually app/build.gradle.

dependencies {
    // Object detection & tracking feature with custom bundled model
    implementation 'com.google.mlkit:object-detection-custom:17.0.0'
}

Step 2: Integrate Custom Models in Android Studio:

Find a TensorFlow Lite (.tflite) model you want to use. There are thousands of public pre-trained models available online (for example, on TensorFlow Hub), or you can train your own model to suit your requirements.

In Android Studio, create an assets folder by right-clicking the app/ folder, then clicking New > Folder > Assets Folder.

Drag your TensorFlow Lite model into your React Native app’s asset folder (e.g., assets/my-model.tflite).

To bundle the model with your app and keep Gradle from compressing it at build time, add the following to your app-level build.gradle:

android {
    // ...
    aaptOptions {
        noCompress "tflite"
        // or noCompress "lite"
    }
}

Step 3: Create a LocalModel object, specifying the path to the model file:

LocalModel localModel =
        new LocalModel.Builder()
                .setAssetFilePath("model.tflite") // path to the model file in assets
                .build();

Step 4: Configure the object detector:

// Single-image mode with multiple object detection and classification
CustomObjectDetectorOptions customObjectDetectorOptions =
        new CustomObjectDetectorOptions.Builder(localModel)
                .setDetectorMode(CustomObjectDetectorOptions.SINGLE_IMAGE_MODE)
                .enableMultipleObjects()
                .enableClassification()
                .setClassificationConfidenceThreshold(0.5f)
                .setMaxPerObjectLabelCount(3)
                .build();

ObjectDetector objectDetector =
        ObjectDetection.getClient(customObjectDetectorOptions);

// For live detection and tracking, use stream mode instead
CustomObjectDetectorOptions streamObjectDetectorOptions =
        new CustomObjectDetectorOptions.Builder(localModel)
                .setDetectorMode(CustomObjectDetectorOptions.STREAM_MODE)
                .enableClassification()
                .setClassificationConfidenceThreshold(0.5f)
                .setMaxPerObjectLabelCount(3)
                .build();

Step 5: Create a method for object detection and pass the image URI:

public void startCustomObjectDetection(String imagePath) {
    InputImage image = null;
    try {
        // Build an InputImage from the file URI passed in from React Native
        image = InputImage.fromFilePath(reactContext, android.net.Uri.parse(imagePath));
    } catch (IOException e) {
        e.printStackTrace();
    }
}

Step 6: Process the input image using the object detector:

objectDetector
        .process(image)
        .addOnFailureListener(e -> {...})
        .addOnSuccessListener(results -> {
            for (DetectedObject detectedObject : results) {
                // ...
            }
        });

Step 7: Retrieve labels and other information about detected objects:

for (DetectedObject detectedObject : results) {
    Rect boundingBox = detectedObject.getBoundingBox();
    Integer trackingId = detectedObject.getTrackingId();
    for (Label label : detectedObject.getLabels()) {
        String text = label.getText();
        int index = label.getIndex();
        float confidence = label.getConfidence();
    }
}

Note: The higher the confidence score, the more likely the detected label is accurate. Below is an example of classifying objects with a custom classification model on Android.

Android Output

[Image: object detection results on Android]

iOS Configuration Using a Custom Model

Create a file with the name CustomObjectDetectionModule or another suitable name. In this module, we will create a class, CustomObjectDetectionModule, with a method for image processing and object detection.

Step 1: Open the iOS folder in Xcode and include the ML Kit libraries in your Podfile:

pod 'GoogleMLKit/ObjectDetectionCustom', '3.2.0'

Step 2: Integrate custom models in Xcode:

Copy the model file (usually ending in .tflite or .lite) to your Xcode project, taking care to select Copy bundle resources when you do so. The model file will be included in the app bundle and available to ML Kit.

Step 3: Create a LocalModel object, specifying the path to the model file:

guard let localModelFilePath = Bundle.main.path(forResource: "Mymodel", ofType: "tflite") else {
    fatalError("Failed to load model")
}

let localModel = LocalModel(path: localModelFilePath)

Step 4: Configure the object detector:

// If you only have a locally-bundled model, just create an object detector from your LocalModel object:
let options = CustomObjectDetectorOptions(localModel: localModel)
options.detectorMode = .singleImage
options.shouldEnableClassification = true
options.shouldEnableMultipleObjects = true
options.classificationConfidenceThreshold = NSNumber(value: 0.5)
options.maxPerObjectLabelCount = 3

Step 5: Create a method to pass the URI and prepare the input image:

@objc
func startCustomObjectDetection(_ imagePath: String) {
    // Load the image from the path passed in from React Native
    guard let uiImage = UIImage(contentsOfFile: imagePath) else { return }

    let visionImage = VisionImage(image: uiImage)
    visionImage.orientation = uiImage.imageOrientation
    // ...
}

Step 6: Process the input image using the object detector:

let objectDetector = ObjectDetector.objectDetector(options: options)

objectDetector.process(visionImage) { objects, error in
    guard error == nil, let objects = objects, !objects.isEmpty else {
        // Handle the error.
        return
    }
    // Show results.
}

Step 7: Retrieve labels and other information about detected objects:

for object in objects {
    let frame = object.frame
    let trackingID = object.trackingID
    let description = object.labels.enumerated().map { (index, label) in
        "Label \(index): \(label.text), \(label.confidence), \(label.index)"
    }.joined(separator: "\n")
}

Below is an example of classifying objects with a custom classification model on iOS.

iOS Output

[Image: object detection results on iOS]

Diving into the Base Model

Base models are the pre-trained models that ship with ML Kit. They offer simplicity and minimal configuration, but limited adaptability to unique tracking or detection challenges.

Android Configuration Using a Base Model

For the base model, we create a Java class in the Android project that uses Google's ML Kit for object detection. This class is part of a React Native module called "MyObjectDetection" and contains methods for configuring and using ML Kit's object detection functionality, as sketched after the list below.

  1. Constructor: The constructor initializes the object detector using the configureObjectDetector method.
  2. configureObjectDetector method: Sets up the object detector with specific options. It enables multiple object detection and classification, and sets the detector mode to single-image mode using ObjectDetectorOptions.
  3. startObjectDetection method: Takes an image path as input, converts it into an InputImage object, and then processes the image using the configured object detector.
  4. The object detection results are handled asynchronously:
  • The addOnSuccessListener callback processes the detected objects, retrieves information like bounding boxes, tracking IDs, and labels, and performs specific actions based on predefined object categories.
  • The addOnFailureListener callback handles any errors that occur during detection and logs them.
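
Below is a minimal sketch of what such a module could look like, assuming the base-model dependency com.google.mlkit:object-detection is added to app/build.gradle; class and method names follow the description above, and the repository's actual implementation may differ.

import android.net.Uri;

import com.facebook.react.bridge.ReactApplicationContext;
import com.facebook.react.bridge.ReactContextBaseJavaModule;
import com.facebook.react.bridge.ReactMethod;
import com.google.mlkit.vision.common.InputImage;
import com.google.mlkit.vision.objects.DetectedObject;
import com.google.mlkit.vision.objects.ObjectDetection;
import com.google.mlkit.vision.objects.ObjectDetector;
import com.google.mlkit.vision.objects.defaults.ObjectDetectorOptions;

import java.io.IOException;

public class MyObjectDetectionModule extends ReactContextBaseJavaModule {

    private final ReactApplicationContext reactContext;
    private ObjectDetector objectDetector;

    public MyObjectDetectionModule(ReactApplicationContext reactContext) {
        super(reactContext);
        this.reactContext = reactContext;
        configureObjectDetector();
    }

    @Override
    public String getName() {
        return "MyObjectDetection";
    }

    // Base model: no LocalModel is needed, just the default ObjectDetectorOptions
    private void configureObjectDetector() {
        ObjectDetectorOptions options =
                new ObjectDetectorOptions.Builder()
                        .setDetectorMode(ObjectDetectorOptions.SINGLE_IMAGE_MODE)
                        .enableMultipleObjects()
                        .enableClassification()
                        .build();
        objectDetector = ObjectDetection.getClient(options);
    }

    @ReactMethod
    public void startObjectDetection(String imagePath) {
        try {
            InputImage image = InputImage.fromFilePath(reactContext, Uri.parse(imagePath));
            objectDetector
                    .process(image)
                    .addOnSuccessListener(results -> {
                        for (DetectedObject detectedObject : results) {
                            // Read bounding boxes, tracking IDs, and labels here
                        }
                    })
                    .addOnFailureListener(e -> e.printStackTrace());
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}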

iOS Configuration Using a Base Model

The Swift implementation of the iOS module for React Native uses Google's ML Kit for object detection with the base model. Here's a breakdown of the implementation:

  1. In the init method, an instance of the object detector is created and configured using ObjectDetectorOptions. It enables single-image mode, multiple object detection, and object classification.
  2. startObjectDetection method accepts an image path as a string. It loads and processes the image using the configured object detector.
  3. The photoObjectDetector processes the vision image and provides the results in a completion handler.
  • If objects are detected, it iterates through the detected objects, retrieves information such as the frame and tracking ID, and, if classification is enabled, the object’s labels with their text and confidence.
  • The code also checks for and handles any errors that may occur during the object detection process, printing the error if one is encountered.

Native Bridging

Lastly, you can call all of these native methods from JavaScript by bridging the modules into React Native:
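
On the Android side, this means registering the module classes in a ReactPackage. Below is a minimal sketch, assuming the module classes shown earlier and a hypothetical ObjectDetectionPackage class; add it to the list returned by getPackages() in MainApplication. (On iOS, the Swift class is typically exposed through an Objective-C file using the RCT_EXTERN_MODULE and RCT_EXTERN_METHOD macros.)

import com.facebook.react.ReactPackage;
import com.facebook.react.bridge.NativeModule;
import com.facebook.react.bridge.ReactApplicationContext;
import com.facebook.react.uimanager.ViewManager;

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ObjectDetectionPackage implements ReactPackage {

    @Override
    public List<NativeModule> createNativeModules(ReactApplicationContext reactContext) {
        List<NativeModule> modules = new ArrayList<>();
        // Expose both the custom-model and base-model modules to JavaScript
        modules.add(new CustomObjectDetectionModule(reactContext));
        modules.add(new MyObjectDetectionModule(reactContext));
        return modules;
    }

    @Override
    public List<ViewManager> createViewManagers(ReactApplicationContext reactContext) {
        return Collections.emptyList();
    }
}

Once registered, the methods are available in JavaScript through NativeModules, for example NativeModules.CustomObjectDetectionModule.startCustomObjectDetection(uri).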

For the complete source code, check out the GitHub repository.

Conclusion

In this blog, we have covered the integration of object detection into React Native apps using ML Kit and TensorFlow.

Whether to opt for a custom or base model depends on the project’s specific needs and available resources.

Remember, as Michael Scott wisely said, “Would I Rather Be Feared or Loved? Easy. Both.” The choice is yours, and it should align with your project’s requirements and resource availability.

You may also check out the official object detection documentation for further information.

For more updates on the latest development trends, follow the Simform Engineering blog.

Follow Us: Twitter | LinkedIn
