Edited 2019 Mar 11 to include changes introduced in TensorFlow.js 1.0. Additional information about some of these TensorFlow.js 1.0 updates can be found here.
Training and building complex models can take a considerable amount of time and resources. Some models require massive amounts of data to reach acceptable accuracy, and computationally intensive models may need hours or days of training to complete. Thus, you may not find the browser to be the ideal environment for building such models.
A more appealing use case is importing and running existing models. You train models (or obtain pre-trained models) in powerful, specialized environments, then import and run them in the browser for impressive user experiences.
Converting the model
Before you can use a pre-trained model in TensorFlow.js, the model needs to be in a web-friendly format. For this, TensorFlow.js provides the tensorflowjs_converter tool, which converts TensorFlow and Keras models to the required web-friendly format. The converter becomes available after you install the tensorflowjs Python package.
tensorflowjs_converter expects the model and the output directory as inputs. You can also pass optional parameters to further customize the conversion process.
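As a sketch, converting a TensorFlow SavedModel with the 1.0 converter might look like the following; the input and output paths are placeholders for your own directories:

```shell
pip install tensorflowjs

# Convert a SavedModel in /path/to/saved_model into web-friendly files
# written to /path/to/web_model (both paths are placeholders).
tensorflowjs_converter \
    --input_format=tf_saved_model \
    /path/to/saved_model \
    /path/to/web_model
```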
The output of tensorflowjs_converter is a set of files:
- model.json — the dataflow graph and weights manifest
- A group of binary weight files, called shards. Each shard file is small in size for easier browser caching, and the number of shards depends on the size of the initial model.
NOTE: If using a tensorflowjs_converter version before 1.0, the output produced includes the graph (tensorflowjs_model.pb), the weights manifest (weights_manifest.json), and the binary shard files.
Run model run
Once converted, the model is ready to load into TensorFlow.js for predictions.
Using TensorFlow.js version 0.x.x:
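A minimal sketch of loading the pre-1.0 converter output with the 0.x API; the URLs are placeholders for wherever you host the converted files:

```javascript
import * as tf from '@tensorflow/tfjs';

// Placeholder URLs for the pre-1.0 converter output (graph and
// weights manifest); adjust to your own hosting.
const MODEL_URL = 'https://example.com/model/tensorflowjs_model.pb';
const WEIGHTS_URL = 'https://example.com/model/weights_manifest.json';

async function predict(inputTensor) {
  // loadFrozenModel is the 0.x API for converted TensorFlow models.
  const model = await tf.loadFrozenModel(MODEL_URL, WEIGHTS_URL);
  return model.predict(inputTensor);
}
```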
Using TensorFlow.js version 1.x.x:
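With 1.x, a converted TensorFlow model loads from the single model.json entry point. A minimal sketch, with a placeholder URL:

```javascript
import * as tf from '@tensorflow/tfjs';

// Placeholder URL for the model.json produced by the 1.0 converter.
const MODEL_URL = 'https://example.com/model/model.json';

async function predict(inputTensor) {
  // loadGraphModel replaces loadFrozenModel in the 1.x API.
  const model = await tf.loadGraphModel(MODEL_URL);
  return model.predict(inputTensor);
}
```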
The imported model behaves the same as models trained and created directly with TensorFlow.js.
Convert all models?
You may find it tempting to grab any and all models, convert them to the web-friendly format, and run them in the browser. But this is not always possible or recommended. There are several factors to keep in mind.
TensorFlow.js does not support all TensorFlow operations; it currently has a limited set of supported operations. As a result, the converter fails if the model contains unsupported operations.
Treating the model as a black box is not always enough. Just because the converter produces a web-friendly model does not mean all is well. Depending on a model's size or architecture, its performance could be less than desirable, and further optimization is often required. In most cases, you will have to pre-process the model's input(s) as well as post-process its output(s). So some understanding of the model's inner workings is almost a given.
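To illustrate the kind of pre- and post-processing involved, here is a sketch for a hypothetical image-classification model; the input size, normalization, and output interpretation are assumptions that depend entirely on the specific model:

```javascript
import * as tf from '@tensorflow/tfjs';

// Hypothetical pre-processing for a model expecting 224x224 RGB inputs
// normalized to [0, 1], with a leading batch dimension.
function preprocess(imageElement) {
  return tf.tidy(() =>
    tf.browser.fromPixels(imageElement) // HTMLImageElement -> int tensor
      .resizeBilinear([224, 224])
      .toFloat()
      .div(255)
      .expandDims(0)); // add the batch dimension
}

// Hypothetical post-processing: pick the index of the highest score.
function postprocess(prediction) {
  return prediction.argMax(-1).dataSync()[0];
}
```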
Getting to know your model
Presumably you have a model available to you. If not, resources exist with an ever-growing collection of pre-trained models. A couple of them include:
- TensorFlow Models — a set of official and research models implemented in TensorFlow
- Model Asset Exchange — a set of deep learning models covering different frameworks
These resources provide the model for you to download. They can also include information about the model, useful assets, and links to learn more.
Another option is Netron, a visualizer for deep learning and machine learning models. It provides an overview of the graph, and you can inspect the model’s operations.
To be continued…
Stay tuned for the follow-up to this article to learn how to pull this all together. You will step through the process in greater detail with an actual model, converting a pre-trained model to the web-friendly format and ending up with a working web application.