Exporting trained TensorFlow models to C++ the RIGHT way!

Hamed MP
8 min read · Mar 11, 2016


It’s been a while since TensorFlow was open-sourced, and it is slowly becoming more and more popular. One of its features is the ability to define and train your model using the Python API and then load the trained model in C++. Until the latest version of TF, figuring out how to do this was a pain in the neck, and people resorted to workarounds. There are some tutorials (let me be honest, I found only one) that show how to port a model, but the model in that tutorial was not trainable, which makes a big difference: you can’t use the same approach for your own trainable models.

As I said, this has become easier in the latest version (0.7 at the time of writing), and I think it will get even easier in upcoming releases. Here we go; the whole code for a CIFAR-10 CNN is shared in the GitHub repository.

Requirements

  • Install Bazel: Google’s build tool used to compile things for TensorFlow.
  • Clone the TensorFlow repo. (It will also make the build process easier if you have already gotten your hands dirty installing TensorFlow from source.)
  • Copy the freeze_graph.py file to your project folder as it is not in the installed wheel yet.
  • Also, in the past there was an issue where a module that “freeze_graph.py” uses, namely “graph_util”, was not in the wheel. If you get an error from “freeze_graph.py” complaining that “graph_util” cannot be found, you should build the wheel yourself to get it.

General Steps

  1. Define the input and output nodes of the graph in our model.
  2. Save the checkpoints. (This is important, as all your trained variables reside here.)
  3. Save the graph definition (the raw structure, no variables). We name this ‘input_graph’.
  4. Use the freeze_graph file to combine the graph structure (no. 3, input_graph) with the variable values from the checkpoint (no. 2) and generate a new graph model, the ‘output_graph’.
  5. Use the output graph in the C++ file to do the inference. Optional: we can also map the network outputs to labels.
  6. Create a BUILD file and build everything in the ‘tensorflow/tensorflow/{our_project}’ folder. This takes a while the first time, as it bundles all the TensorFlow pieces into a single runnable file.
  7. Your executable file is ready!!!

Now let me go deeper into each step with a real-world example. I wrote the model myself (of course, with some code snippets borrowed from the TensorFlow examples, but the overall architecture is my own).

Step # 1

Here we declare which node is the input of our graph and which is the output. These are needed because in the C++ code, when you provide a sample for inference, the code has to know where to feed the sample into the graph, and after inference it has to know which node’s (or nodes’) results to report back to you.
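The naming itself is just a matter of passing name arguments when you build the graph. Here is a minimal sketch, not my real CIFAR-10 network: the names ‘input’ and ‘output’, the shapes, and the single toy layer are only illustrative, so replace them with whatever matches your own architecture.

import tensorflow as tf

# Minimal sketch: the only point is that the first and last ops carry explicit
# names so the C++ side can find them later. Replace the toy layer with your model.
images = tf.placeholder(tf.float32, shape=[None, 32, 32, 3], name='input')
flat = tf.reshape(images, [-1, 32 * 32 * 3])
weights = tf.Variable(tf.truncated_normal([32 * 32 * 3, 10], stddev=0.1))
biases = tf.Variable(tf.zeros([10]))
logits = tf.matmul(flat, weights) + biases
output = tf.nn.softmax(logits, name='output')

Later on, ‘input’ is where the C++ code will feed the image tensor and ‘output’ is the node it will fetch.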

Step #2

In your training loop, save the checkpoints every few steps, say every 10, 50, or 100 steps; it depends on what you are trying to get out of the model. To keep things smooth, let’s use the following convention (taken from the TensorFlow examples) so the checkpoint is easier to load later.
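Roughly, the saving code looks like the sketch below; the folder ‘models’, the prefix ‘saved_checkpoint’, and the file name ‘checkpoint_state’ are just illustrative names that I will keep referring to.

import os
import tensorflow as tf

# Sketch only: the dummy variable stands in for your model's variables.
dummy = tf.Variable(0.0, name='dummy_variable')
saver = tf.train.Saver()

checkpoint_dir = 'models'
if not os.path.exists(checkpoint_dir):
    os.makedirs(checkpoint_dir)
checkpoint_prefix = os.path.join(checkpoint_dir, 'saved_checkpoint')

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())   # TF 0.x-style initializer
    # Inside your training loop, every few steps:
    saver.save(sess, checkpoint_prefix,
               global_step=0,                       # always 0 -> a single file 'saved_checkpoint-0'
               latest_filename='checkpoint_state')  # constant name for the checkpoint state file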

The two important points here are:

  1. global_step is set to 0, so we always have a single checkpoint file named ‘saved_checkpoint-0’. The alternative is to pass the current step instead of zero, but then you end up with the last five checkpoints each time, with ever-changing names, which makes them harder to load later.
  2. latest_filename is set to a constant name.

Step #3

Save the graph definition once, either before the training loop starts or right before the next step.
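Saving the raw definition is a one-liner with tf.train.write_graph. A sketch, reusing the illustrative ‘models’ folder and introducing the ‘input_graph.pb’ name used in the next step:

import tensorflow as tf

# Sketch: dump the graph structure only (no variable values) as a text protobuf.
# 'models' and 'input_graph.pb' are illustrative names reused in Step #4.
with tf.Session() as sess:
    tf.train.write_graph(sess.graph_def, 'models', 'input_graph.pb', as_text=True)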

Step #4

As mentioned before, you should have the freeze_graph.py file next to your project and import it in your training file. Then it’s really easy to produce the “output_graph”. For more examples of how to use it, you can refer to the “freeze_graph_test.py” file. To summarize, it combines the graph structure with the values from the checkpoint file into one file, so when you import it into your C++ code, it carries both your network architecture and the values of your trained variables.
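The call looks roughly like the sketch below, which follows the freeze_graph_test.py usage of that era. Every path here, and the output node name ‘output’, is an illustrative assumption that has to match what you actually used in the previous steps.

import os
import freeze_graph   # the file you copied next to your project

# All names below are illustrative and must match Steps #1-#3.
input_graph_path = os.path.join('models', 'input_graph.pb')
input_saver_def_path = ''                    # no separate saver definition file
input_binary = False                         # we wrote the input graph with as_text=True
input_checkpoint_path = os.path.join('models', 'saved_checkpoint') + '-0'

output_node_names = 'output'                 # the name you gave the output op in Step #1
restore_op_name = 'save/restore_all'
filename_tensor_name = 'save/Const:0'
output_graph_path = os.path.join('models', 'output_graph.pb')
clear_devices = False

freeze_graph.freeze_graph(input_graph_path, input_saver_def_path,
                          input_binary, input_checkpoint_path,
                          output_node_names, restore_op_name,
                          filename_tensor_name, output_graph_path,
                          clear_devices, '')  # last argument: initializer_nodes (empty)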

Note: In my first trial I simply passed the raw name of the output node, but when testing this tutorial in a new project I got an error saying the output node is not among the node names. By debugging and checking all the node names, I found that the name in the second project was “Dense2/output_graph”. Try the plain name first; if you get the same error, use the full (scoped) name of the node.

When you run this, you will get some feedback in the console if everything goes well.

Step #5

I used the C++ code provided by TensorFlow and modified it for my project. It is well written (like TensorFlow itself and the rest of Google’s code :) ). Also grab the BUILD file, which we need too.

Since we need to build the file inside the same root folder, for convenience duplicate the “label_image” folder, name it whatever you want (I named it “ccifar”), and do everything that follows inside that new copy of the folder.

So let’s start modifying it according to your needs. I won’t copy-paste it here again, but I will point out the line numbers you SHOULD change to make everything go smoothly. Consider giving the full paths to your files instead of relative paths. I also suggest laying things out like this: keep your BUILD file next to your main.cc file, along with a data folder holding your extra files such as output_graph.pb, labels_mapping.txt, and your test image.

Let’s modify these lines:

  • Line #237: This is the file you feed into your model to run inference and get a result. You can override it with a command-line argument; more on that in the running section below.
  • Line #238: Insert the path to the output_graph.pb we got in the Step #4 here.
  • Line #241: You can give a text file with a label for each output class to make running easier. The first line is automatically assigned to output 0, the second line to output 1, and so on. For example, for CIFAR-10 it looks like this:
airplane
automobile
bird
cat
deer
dog
frog
horse
ship
truck
  • Line #244–245: Change the width and height to match your input. If your input is not an image like ours, find where these values are used and comment those parts out.
  • Line #248–249: Change these to the names of your input and output layers.

Everything is ready to build our model now. It looks like a lot of steps, but it is really simple; you can adapt your code according to the steps above in less than 10 minutes or so. So don’t be scared off by the length.

Step #6

Modify the BUILD file to tell Bazel which sources to compile; if you kept TensorFlow’s original file names, there is no need to change anything.

If you do exactly as I told above, now we are in the following path:
“<path to the tensorflow repo clone>/tensorflow/tensorflow/examples/ccifar”

Now open a terminal in the folder that contains ccifar (i.e. ‘examples’) and run the following command to build it:
bazel build ccifar/...

It will take a while and may produce a bunch of warnings. When it finishes, you will see new folders in your cloned TensorFlow root folder named “bazel-*”.

Note: As I mentioned in the Requirements section, if you build the wheel yourself you will install bazel and swig. I had a problem with bazel: “First argument of load() is a path, not a label. It should start with a single slash if it is an absolute path.” It turned out to be due to changes in bazel: the WORKSPACE file is up to date with the latest bazel, but I was using bazel 0.1.3. After updating to the latest version, the problem was fixed.

Step #7

Finished!

Your executable file is now at the following path:

“tensorflow/bazel-bin/tensorflow/examples/ccifar”

If you run it, it prints its top predictions to the console. The first prediction is “automobile”, which matches the image I provided to the model: one of the CIFAR-10 examples (32×32).

You can also pass additional arguments when running from the terminal, for example: “--image=<path to your image>”.

I trained my own CIFAR-10 model for a ridiculously small number of steps, as few as 1,000, just for tutorial purposes, but as you can see it still predicts the correct label for the image. I have posted the complete project on GitHub. Actually, in the first 20 days after TensorFlow’s release I wanted to write a tutorial about this, but I didn’t follow through because it seemed so easy. The model reaches about 75% accuracy with 40–50K steps. If you have any questions about it, ask me.

I hope this tutorial helps you get your models working in C++ easily. I have tried to explain every situation I encountered while developing this and while writing the post. Let me know in the comments about any new problems, other ways to do it, and so on.

If you enjoyed the tutorial, please let me know by inviting me to a cup of coffee here: www.buymeacoff.ee/hamedmp
