Speed Up Pre-Trained TensorFlow Models on tinyML Devices

Like Microcontrollers, Using deepC

Rohit Sharma · Published in AITS Journal · Apr 15, 2020

deepC Inference Framework and Compiler (github.com/ai-techsystems/deepC)

What's a DNN Compiler?

Deep Neural Network Compiler (DNNC) is an ahead-of-time (AOT) compiler and inference framework. Part 1 and Part 2 of this article series showed how to use DNNC as an inference framework. This article shows how to use DNNC to compile a machine learning model for a microcontroller or microcomputer costing anywhere from 10¢ to $5.
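To give a feel for the workflow, here is a minimal sketch of kicking off an AOT compile from Python. It assumes the deepCC compiler driver shipped with the deepC repository and a placeholder model file name; check github.com/ai-techsystems/deepC for the current interface before relying on it.

```python
# Sketch (assumption): invoke the deepCC driver on a serialized model so that
# deepC emits C++ source and a native artifact suitable for a tiny target.
# "deepCC" and "model.onnx" are placeholders based on the project README.
import subprocess

subprocess.run(["deepCC", "model.onnx"], check=True)
```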

And what is a deep learning model?

A deep learning model is a collection of programs, trained parameters (weights and biases), and associated computational elements. It can be stored as a Python program or serialized in a format such as ONNX or TensorFlow protobuf. Formats like ONNX and TensorFlow protobuf are plain file formats; they cannot be executed directly on any computer.
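To make the distinction concrete, here is a minimal sketch (the layer sizes and file name are made up for illustration) of a Keras model being saved to disk. The saved file holds the graph structure and trained parameters as data; something else, a framework or a compiler such as DNNC, has to turn it into executable code.

```python
# Minimal sketch: a tiny Keras model serialized to disk. The resulting .h5
# file contains the architecture and weights as data, not a runnable program.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# HDF5 serialization of structure + parameters (weights and biases).
model.save("model.h5")
```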

Converting a Keras model to .pb

As easy as steps 1, 2, 3.

Step 1

Download this code (link here) to convert a Keras model file to a TensorFlow model file in .pb (protocol buffer) format.
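The linked script is not reproduced here; the sketch below shows one common way to do this conversion, assuming a TensorFlow 1.x-style session (or TF 2.x running in v1 compatibility mode). The file names are placeholders, and the actual script you download may differ.

```python
# Sketch (one common approach, not necessarily the linked script): freeze a
# Keras .h5 model into a TensorFlow .pb graph with weights baked in as constants.
import tensorflow as tf
from tensorflow.python.framework import graph_util

tf.compat.v1.disable_eager_execution()  # use graph/session mode (TF 1.x style)

model = tf.keras.models.load_model("model.h5")      # placeholder file name
session = tf.compat.v1.keras.backend.get_session()

# Replace variables with constants so the graph is self-contained.
output_names = [out.op.name for out in model.outputs]
frozen_graph = graph_util.convert_variables_to_constants(
    session, session.graph.as_graph_def(), output_names)

# Write the frozen graph as a binary protocol buffer (.pb).
tf.io.write_graph(frozen_graph, ".", "model.pb", as_text=False)
```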
