Integration of a TensorFlow.js Model with an Angular Application

Kush Hingol · Published in Analytics Vidhya · Nov 30, 2019

This post covers the integration of machine learning models with Angular to build interesting machine learning web apps. With the help of TensorFlow.js we can run machine learning models entirely on the client side, in the browser, for faster predictions with very low latency.

So let's get started. The first step is to convert the existing saved_model or frozen model to model.json. For the conversion we'll use tensorflowjs_converter.

Step 1: Model Conversion.

To convert your existing model, first install TensorFlow.js using the following command:

$ pip install tensorflowjs

The best thing about the TensorFlow.js converter is that it is independent of the model type: it supports both saved models and frozen models.

For Saved model: Run the following command.

$ tensorflowjs_converter \
--input_format=tf_saved_model \
--output_node_names='MobilenetV1/Predictions/Reshape_1' \
--saved_model_tags=serve \
/mobilenet/saved_model \
/mobilenet/web_model

Note: /mobilenet/saved_model is the input path, the directory containing the saved_model.pb file along with its weights, and /mobilenet/web_model is the output path where the converted model.json and its shard files will be stored.

For Frozen model: Run the following command.

$ tensorflowjs_converter \
--input_format=tf_frozen_model \
--output_node_names='MobilenetV1/Predictions/Reshape_1' \
/mobilenet/frozen_model.pb \
/mobilenet/web_model

Note: /mobilenet/frozen_model.pb is the input path and /mobilenet/web_model is the output path where model.json will be stored after conversion.

Note: if you encounter a NonMaxSuppressionV5 error during model conversion, try retraining the model after downgrading TensorFlow to version 1.14.0, then follow the same conversion process again.

For successful model conversion the following versions are required:
1. tensorflow v1.14.0
2. tensorflowjs v1.3.2
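
For example (a sketch; adjust to your environment), the versions can be pinned with pip:

$ pip install tensorflow==1.14.0
$ pip install tensorflowjs==1.3.2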

Let's integrate the model with Angular.

We assume that the required components have already been created using the Angular CLI.

Step 2: Install TensorFlow Js on the client-side.

$ npm install @tensorflow/tfjs --save

Step 3: Place the model.json file along with its generated shard files in the assets folder.
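
After conversion, the assets folder might look roughly like this (the folder name and shard count are assumptions and depend on your model):

src/assets/model/
    model.json
    group1-shard1of4.bin
    group1-shard2of4.bin
    ...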

Step 4: For prediction let’s add a canvas and a video element in component’s HTML.

<video #videoCamera hidden id="video"></video>
<canvas #canvas id="canvas"></canvas>

Define their width and height according to your requirements.

Add the ViewChild decorator to get references to these elements, and also add the camera configuration.

@ViewChild('videoCamera', {static: true}) videoCamera: ElementRef;
@ViewChild('canvas', {static: true}) canvas: ElementRef;

The code block below initializes the camera and its configuration.
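
A minimal sketch of that initialization, using the browser's getUserMedia API (the frame dimensions and facing mode here are assumptions), could look like this:

async setupCamera(): Promise<void> {
  if (!navigator.mediaDevices || !navigator.mediaDevices.getUserMedia) {
    throw new Error('getUserMedia is not supported in this browser');
  }
  // Request the front camera with the same dimensions as the canvas (assumed 350x450).
  const stream = await navigator.mediaDevices.getUserMedia({
    audio: false,
    video: { facingMode: 'user', width: 350, height: 450 },
  });
  const video: HTMLVideoElement = this.videoCamera.nativeElement;
  video.srcObject = stream;
  video.width = 350;
  video.height = 450;
  await video.play();
}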

Step 5: Now it’s time to load the model and warm it up.

Here we use tf.loadGraphModel() instead of tf.loadFrozenModel(), because TFJS deprecated tf.loadFrozenModel() in its latest update. The latest TFJS version supports two functions for loading a model:

  1. tf.loadGraphModel()
  2. tf.loadLayersModel()

Fun fact: TensorFlow.js has essentially just renamed these functions:
tf.loadFrozenModel() ---------> tf.loadGraphModel()
tf.loadModel() ----------> tf.loadLayersModel()
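
With that in mind, a minimal sketch of loading and warming up the converted graph model could look like this (the assets path and input shape are assumptions):

import * as tf from '@tensorflow/tfjs';

// ...inside the component class:
model: tf.GraphModel;

async loadModel(): Promise<void> {
  // Load the converted model.json from the assets folder (path assumed).
  this.model = await tf.loadGraphModel('/assets/model/model.json');
  // Warm the model up with a dummy input so the first real prediction is fast.
  const dummyInput = tf.zeros([1, 450, 350, 3]); // assumed [batch, height, width, channels]
  const warmupResult = await this.model.executeAsync(dummyInput);
  tf.dispose([dummyInput, warmupResult]);
}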

Step 6: Pass continuous video stream to the model for prediction.

First we pre-process the image by casting the canvas image to float32. The pre-processed image is then passed to the model for prediction. The model returns a tensor as output, which we need to convert into an array. Below is the code block for predicting on continuous image frames.

async predictFrames(video, model) {
  // Pre-process: grab the current canvas frame, cast it to float32 and add a batch dimension.
  const image = tf.tidy(() => {
    let img = this.canvas.nativeElement;
    img = tf.browser.fromPixels(img);
    img = tf.cast(img, 'float32');
    return img.expandDims(0);
  });
  // Run the model and convert the output tensor into a plain array.
  const result = await model.executeAsync(image) as any;
  const prediction = Array.from(result.dataSync());
  if (prediction.length > 0) {
    this.renderPredictions(prediction);
  } else {
    // No detection: just draw the current video frame onto the canvas.
    const canvas = this.canvas.nativeElement;
    const ctx = canvas.getContext('2d');
    canvas.width = 350;
    canvas.height = 450;
    ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
    ctx.drawImage(video, 0, 0, 350, 450);
  }
  // Schedule the next frame so prediction runs continuously.
  requestAnimationFrame(() => {
    this.predictFrames(video, model);
  });
}

The predictFrames function calls itself recursively (via requestAnimationFrame) to make continuous predictions.
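
Putting the pieces together, the loop can be started once the camera and the model are ready, for example from ngOnInit (a sketch that reuses the setupCamera and loadModel helpers assumed above):

async ngOnInit(): Promise<void> {
  await this.setupCamera();  // start the webcam stream
  await this.loadModel();    // load and warm up model.json
  this.predictFrames(this.videoCamera.nativeElement, this.model);
}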

Step 7: Draw the boundary box for the prediction.

renderPredictions(predictions: any) {
  const canvas = this.canvas.nativeElement;
  const ctx = canvas.getContext("2d");
  ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
  // Font for the labels (value assumed; pick one that suits your layout).
  const font = "16px sans-serif";
  ctx.font = font;
  ctx.textBaseline = "top";
  ctx.drawImage(this.videoCamera.nativeElement, 0, 0, 350, 450);

  predictions.forEach(prediction => {
    const x = prediction.bbox[0];
    const y = prediction.bbox[1];
    const width = prediction.bbox[2];
    const height = prediction.bbox[3];
    // Boundary box
    ctx.strokeStyle = "#EB3434";
    ctx.fillStyle = "#EB3434";
    ctx.lineWidth = 3;
    ctx.beginPath();
    ctx.rect(x, y, width, height);
    ctx.stroke();
    // Label background
    const textWidth = ctx.measureText(prediction.class).width;
    const textHeight = parseInt(font, 10); // base 10
    ctx.fillRect(x, y, textWidth + 4, textHeight + 4);
  });

  // Render the class name at the end so it draws over the rectangle.
  predictions.forEach(prediction => {
    const x = prediction.bbox[0];
    const y = prediction.bbox[1];
    if (prediction.class) {
      ctx.fillStyle = "#FFFFFF";
    }
    ctx.fillText(prediction.class, x, y);
  });
}

Thus we have successfully integrated model.json with Angular. The main benefit of loading the model in the browser is reduced latency and transmission time.

Note: different models return their outputs as tensors of different shapes, so here is a brief explanation.

Logging the model's output to the console shows, for example, two tensors.

Both tensors indicate a lot of things, but for now let's focus on their shapes. The two arrays differ: the first tensor contains the box classification scores with shape [1, 1917, 2], and the second tensor contains the box locations with shape [1, 1917, 1, 4].

where 1917 = the number of box detectors;
2 = the number of classes;
4 = the number of co-ordinates for the box.
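
As an illustration, the output shapes can be inspected and split into scores and boxes like this (a sketch; the order of the tensors depends on the model, so treat the indices as assumptions):

const output = await this.model.executeAsync(image) as tf.Tensor[];
console.log(output.map(t => t.shape)); // e.g. [[1, 1917, 2], [1, 1917, 1, 4]]

// Assumed order: index 0 = classification scores, index 1 = box coordinates.
const scores = output[0].dataSync(); // 1917 detectors x 2 class scores
const boxes = output[1].dataSync();  // 1917 detectors x 4 box coordinates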

Following is the link of complete code block: https://github.com/kushhingol/tfjs-integration

Summary

  1. We first discussed converting an existing .pb model to model.json using the TensorFlow.js converter.
  2. The second part covered the integration of model.json with Angular using TensorFlow.js.
  3. We discussed the model's output tensors, what they indicate, and their shapes.
  4. The main benefit of loading the model in the browser itself is to eliminate latency and data transmission delay. With TensorFlow.js it is entirely possible to convert your existing model to a JavaScript version with the help of the tfjs converter. Many applications prefer this technique to achieve faster and more accurate results for real-time prediction.
