TensorFlow + MAX: JavaScript Edition

Re-Packaging Open Source Deep Learning Models for JS Developers

--

If the stats in the recent 2019 Stack Overflow Survey report are any indication, there are tons of developers out there working with JavaScript, millions of us by even the most conservative estimates. In fact, for the seventh year in a row, JavaScript was the most widely used programming language, with around 68% of all developers reporting that they had used it. With so many JS devs out there at a time when we’re witnessing rapid advancements in AI and deep learning, it makes sense that the two have come together in interesting and surprising ways. No other project has facilitated the merging of these two communities quite like TensorFlow.js.

Here at CODAIT, we’ve been fans of TensorFlow.js for a while now and have previously used it to create apps like magicat and veremin. TensorFlow.js has been immensely popular in the community as well, having racked up over 10,000 stars on GitHub since being introduced in early 2018. More recently, in March of 2019, TensorFlow.js 1.0 was announced at the TensorFlow Dev Summit in Sunnyvale, CA, bringing new features and enhancements that could be just what’s needed to spark a new wave of innovation, or at least make life a little easier for JavaScript developers.

A chart showing some of the performance enhancements of TensorFlow.js v1.0

We ❤ JavaScript @ CODAIT

The popularity of TensorFlow.js, along with this recent announcement and the exciting performance gains of v1.0, strengthened our belief that this is an area worth looking into. As a result, we’ve picked our previously converted deep learning models from the Model Asset eXchange (MAX) back up, and this time have shifted our focus to finding the easiest, most efficient way to package models for use in JavaScript applications.

We wanted a solution that would feel intuitive, both in the browser for the web and while doing Node.js development. To get there, we knew we’d need to create a module that could be consumed in different ways and required as little setup as possible to get started.

We looked at a variety of tools and methods, but in the end, we settled on a solution that should be very familiar to most JS developers: npm. Read on to get more details about what’s contained in the module and how to use it, but if you’re eager to get started, the MAX Image Segmenter model is available now.

The landing page for the MAX Image Segmenter on npm.

Re-packaging models for JS developers

Some readers may be familiar with the MAX Image Segmenter model already, but for those of you who aren’t, I’d recommend taking a look at the model’s page on the Model Asset eXchange to learn more. In short, the model accepts an image as input and returns a segmentation map, which is a pixel-by-pixel object prediction for the entire image. It can look something like this:

An image segmentation map showing a human (green) standing in front of some sheep (red).
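Conceptually, that segmentation map is just a 2D grid the same size as the input image, where every entry is the ID of the class predicted for that pixel. The tiny example below is purely illustrative; the grid size and class IDs are made up for this sketch and are not the model’s actual output:

  // Illustrative only: a 4x4 segmentation map, where each number is the
  // class ID predicted for that pixel (e.g. 0 = background, 15 = person,
  // 17 = sheep in this made-up example)
  const segmentationMap = [
    [0,  0, 15, 15],
    [0, 15, 15, 17],
    [0, 15, 17, 17],
    [0,  0, 17, 17],
  ];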

After converting the model as described in va barbosa’s previous article, we bundled it with all the pre- and post-processing logic required to handle user-submitted images and the TF.js code needed to run the model, then made that package publicly available on npm, the largest registry of open-source software for JS developers.

Based on my experience, I see three major benefits to packaging and consuming a model this way:

  1. Less code needed to process user input or clean the model’s output.
  2. No need to worry about environment set-up, since everything is included.
  3. Model assets are downloaded to and run directly on the client, rather than on a remote server across the web.

There are plenty of other reasons adopting a full JavaScript stack can be nice for app development, but I want to highlight how these points make the model easy to use in a web app example and in a Node.js CLI utility that we recently worked on here at CODAIT.

Usage for the web

Using this new model format for the web couldn’t be simpler. Thanks to jsDelivr, an awesome open-source CDN, all it takes is two lines of code to import everything you need to use the model in a web app:

Two lines are all it takes to load TensorFlow.js and the MAX Image Segmenter into a web app.
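As a rough sketch, those two lines are a pair of script tags pulling the packages from jsDelivr. The exact URLs below are an assumption on my part, so check the module’s README for the canonical snippet:

  <!-- sketch only: the exact jsDelivr paths may differ from these -->
  <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
  <script src="https://cdn.jsdelivr.net/npm/@codait/max-image-segmenter"></script>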

This loads the model and its dependencies, and even gets things “warmed up” so users can get the fastest responses possible once they begin submitting input. For more on changing this default behavior and other options, see the module’s README.

For a more interactive look at how easily this model can be integrated into a web app, I whipped up a quick demo on CodePen that shows off the basic functionality of the model and how to visualize its output. Click the Run Pen button to run the demo:

This demo illustrates a simple way to process user-submitted images and display the model’s prediction.
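In code, the heart of that demo boils down to something like the sketch below. The global name `imageSegmenter`, the `predict` method, and the shape of its result are assumptions here, so defer to the module’s README and the CodePen source for the real API:

  // Sketch only: run a user-selected image through the model in the browser.
  // `imageSegmenter` is assumed to be the global exposed by the CDN bundle,
  // and `predict` is assumed to resolve with the parsed segmentation result.
  const img = document.getElementById('user-image');
  imageSegmenter.predict(img).then(prediction => {
    // visualize or inspect the per-pixel prediction however you like
    console.log(prediction);
  });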

Normally, making predictions on user data requires network calls, APIs, and working with remote servers. With this new method, your app and the model both live in the browser, so communication between the two naturally becomes much simpler.

We wanted a model that could be used easily by JavaScript developers in any environment.

Usage for Node.js

JavaScript isn’t just for the browser anymore, so we wanted to make sure that the package we created for JS models would be easily consumable in a Node.js application as well.

To test our method, we went ahead and refactored a Node.js CLI utility called magicat to use this npm-installable model rather than the manually converted version it had been built with. The result was cleaner, more concise code with better performance. You can’t ask for much more of an improvement than that!

To install the model, all it took was one command at the terminal to add it to my app’s dependencies and get everything set up:

$ npm install @codait/max-image-segmenter

In my app, one import statement was all I needed to gain access to the model’s original functionality plus some nice JS-only enhancements. These include things like building the image pre-processing into the module, so I don’t have to resize images or do any manipulation of tensors myself; I just pass an image to the predict method. Even the prediction itself is enhanced, as there are now built-in convenience methods for parsing the output as well.

Once installed, this line of code is all that’s required to load the model into a Node.js app.
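Here is a minimal sketch of that Node.js usage, under the assumption that the package exports a `predict` method that accepts a path to an image file; the actual export shape and method names are documented in the README:

  // Sketch only: load the npm-installed model and run a prediction.
  // The export shape, the `predict` method, and accepting a file path
  // are assumptions here; see the package README for the actual API.
  const imageSegmenter = require('@codait/max-image-segmenter');

  imageSegmenter
    .predict('path/to/image.jpg')
    .then(prediction => console.log(prediction));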

magicat is a command-line utility for searching through images and cropping objects out of them using deep learning. At the time it was created, there weren’t a lot of similar projects out there and models had to be converted manually. In addition, all the wrapping code needed to be ported from Python to JavaScript. With models more readily available to be used in JS applications now, the next magicat-type app should be even more straightforward to develop.

Conclusion

Until recently, working with deep learning models in JavaScript could be a bit like the wild west. Now, with the v1.0 updates and even stronger support for TensorFlow.js models in open source projects like MAX, there’s never been a better time to get started creating deep learning applications in JavaScript. For more examples and tips on how to get started, check out the full README in the project’s GitHub repo.

While we believe we’ve found a solution that works really well for JS development in this new module format, we want to know what you think! What kinds of applications are you creating with deep learning and JavaScript, and how would this new module integrate with them?
