Bridge tools for machine learning frameworks

Adam Czapski
Published in Jit Team
4 min read · Sep 6, 2022


Machine learning tools abound on the market, and each framework has its own format for saving a model and its data. These formats are not compatible with one another, so a model has to be converted before it can be used in a different tool or framework.

Just take a look at some tools which output their own model formats:

  • TensorFlow — framework developed by Google
  • Keras — a high-level framework that runs on top of TensorFlow
  • PyTorch — framework developed by Meta
  • MXNet — framework developed by Amazon
  • Theano — framework developed by the University of Montreal
  • Caffe — framework developed by the Berkeley Vision and Learning Center
  • DeepLearning4j — framework developed by Skymind
  • Apache TVM — deep learning compiler stack developed under the Apache Software Foundation
  • PaddlePaddle — framework developed by Baidu

When it comes to formats, there is no rigid standard. Various companies and groups are vying to establish the standard. However, there are tools such as Open Neural Network Exchange (ONNX), Neural Network Exchange Format (NNEF), and Open Visual Inference and Neural Network Optimization (OpenVINO) that act as a bridge between these frameworks.

Open Neural Network Exchange (ONNX)

ONNX is an open format for representing machine learning models. It is supported by many frameworks and tools and can be used to exchange models between them. The project was started jointly by Facebook and Microsoft.

Source: https://onnx.ai/supported-tools.html

Nevertheless, ONNX is still too immature to live up to high hopes of being a silver bullet for porting models from one framework to another because:

  • depending on the model’s architecture and implementation, converting complex models to ONNX may be difficult and necessitate modifying the code
  • converting models between frameworks is not straightforward and ONNX model import is not supported by the majority of frameworks
  • you might need to use other software if you wish to convert a model from ONNX to another format

Neural Network Exchange Format (NNEF)

NNEF is a machine learning model specification format that defines a common data structure for representing neural networks. The format is intended to provide compatibility between different machine learning frameworks and tools.

Source: https://www.khronos.org/nnef

NNEF and ONNX are two comparable open formats with some differences juxtaposed below.

Source: https://www.khronos.org/blog/nnef-and-onnx-similarities-and-differences

Read Khronos Blog to learn more about the differences between NNEF and ONNX.

As with ONNX, some conversions may not work out of the box, and further tinkering may be needed to make your solution work.

Open Visual Inference and Neural Network Optimization (OpenVINO)

The OpenVINO™ Toolkit is an open source product that enables inference on the edge. However, since it is an Intel product, the caveat is that the toolkit is optimized specifically for Intel hardware. If that is a constraint you can accept, you are in for a fairly seamless experience, with a big chunk of the work already done for you.

Source: https://docs.openvino.ai/latest/index.html

The OpenVINO toolkit is a collection of high-performance model acceleration libraries and tools, covering CPU and GPU support, the Inference Engine API, deployment, training, and evaluation. Developers can use the toolkit to select models, deploy them through the C++/Python Inference Engine API, and integrate that API with application logic. In this way, OpenVINO bundles a series of functionalities for building applications that use computer vision, automatic speech recognition, natural language processing, and more.

One of the easiest ways to see and test ride the capabilities of OpenVINO is to run a selection of Jupyter notebooks. They are ready to use so that you can learn about and play with the OpenVINO Toolkit straightaway. The notebooks give developers an overview of OpenVINO fundamentals and show them how to use the API for optimized inference.

Conclusion

There are various open source frameworks and tools for building neural networks. This may sound like very good news, but competing standards often cause headaches, and this is exactly what data scientists have to deal with. With the bridge tools listed in this post, however, data scientists have more flexibility to deploy their models in the field regardless of the framework they use.
