TensorFlow Sessions statically run TensorFlow Graphs.

Ouwen Huang
3 min read · Jul 27, 2018


Jupyter Notebook Here. (First post here)

If you have a TensorFlow graph, whether you downloaded it or built it from scratch with TensorFlow’s Graph API, you will need to use the Session API to run it.

We will take the following graph as a simple example. When we call sess.run() on a tf.Tensor, the session executes just the subgraph needed to produce that tensor’s value.
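The original notebook snippet is not reproduced here, but a minimal sketch of such a graph might look like the following (the names a, b, and my_output are assumptions; this uses tf.compat.v1 so it also runs under TensorFlow 2):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()  # restore TF 1.x graph/session semantics

g1 = tf.Graph()
with g1.as_default():
    a = tf.constant(2.0, name='a')            # each call adds a tf.Operation
    b = tf.constant(3.0, name='b')
    my_output = tf.add(a, b, name='my_output')

with tf.Session(graph=g1) as sess:
    result = sess.run(my_output)  # fetch the tf.Tensor's value
print(result)  # 5.0
```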

g1.get_operation_by_name will simply return the tf.Operation object. We can get an array of the tf.Tensor objects that this operation produces via its .outputs attribute. Remember, it is an important distinction that tf.Tensor and tf.Operation are different! A tf.Operation is a node; a tf.Tensor is an edge.
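As a hypothetical illustration (again assuming a graph g1 containing an op named my_output, since the original gist is not shown):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

g1 = tf.Graph()
with g1.as_default():
    tf.add(tf.constant(2.0, name='a'), tf.constant(3.0, name='b'),
           name='my_output')

op = g1.get_operation_by_name('my_output')  # a tf.Operation: a graph node
tensors = op.outputs                        # list of tf.Tensor: graph edges
print(op.name)          # my_output
print(tensors[0].name)  # my_output:0
```

Note the tensor name my_output:0 — the :0 suffix indexes into the operation’s outputs.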

The way we ran the graph above is actually very inefficient: the graph runs six times, once per sess.run call. You can see this by adding a quick tf.Print. tf.Print is a graph node that passes its input through while logging values and messages to stderr from the TensorFlow C++ runtime.
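A sketch of how this shows up (the specific tensors here are assumptions, not the article’s originals): every sess.run call re-executes the graph, so the tf.Print message appears on stderr once per call.

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

g = tf.Graph()
with g.as_default():
    x = tf.constant(1.0)
    x = tf.Print(x, [x], message='graph was run: ')  # logs to stderr each run
    y = x + 1.0

with tf.Session(graph=g) as sess:
    # Six separate run calls -> the message is printed six times.
    results = [sess.run(y) for _ in range(6)]
```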

Yikes

We only want to run the graph once and retrieve all outputs. We can easily tweak our code to make a single run call, and the tf.Print output confirms the graph now runs once instead of six times.
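The tweak is to pass a list of fetches to one sess.run call, which executes the graph once and returns the values in the same order (a sketch with assumed tensors):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

g = tf.Graph()
with g.as_default():
    a = tf.constant(2.0)
    b = tf.constant(3.0)
    s, p, d = a + b, a * b, a - b

with tf.Session(graph=g) as sess:
    # One execution, all three values fetched together.
    outputs = sess.run([s, p, d])
print(outputs)  # [5.0, 6.0, -1.0]
```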

We may want to add some more dynamic elements to our graph. This can be done with tf.Variable operations and tf.Placeholder operations. tf.Placeholder is a simple operation that takes in a value during the session run. Rather than having my_input be a constant, we can instead use a placeholder.
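A minimal sketch, keeping the article’s my_input name (the surrounding math is an assumption): the placeholder is fed a concrete value through feed_dict at run time.

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

g = tf.Graph()
with g.as_default():
    # No value yet -- it is supplied when the session runs.
    my_input = tf.placeholder(tf.float32, shape=(), name='my_input')
    doubled = my_input * 2.0

with tf.Session(graph=g) as sess:
    out = sess.run(doubled, feed_dict={my_input: 21.0})
print(out)  # 42.0
```

Running sess.run(doubled) without feeding my_input would raise an error, since the placeholder has no default value.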

tf.Variable is a bit more interesting… We will create a graph containing just one variable.

When we inspect what is added to our graph, we notice that tf.Variable is actually a group of many different operations: tf.Identity, tf.Assign, tf.VariableV2, and more operations within the initializer. These exist to help tf.Variable store state.
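We can sketch this inspection by listing the op types in the graph (use_resource=False is an assumption added here so that TF 2’s compat mode produces the classic VariableV2-style ops the article describes):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

g = tf.Graph()
with g.as_default():
    v = tf.get_variable('v', shape=(), use_resource=False,
                        initializer=tf.random_normal_initializer())

# One tf.Variable expands into several ops: the storage node (VariableV2),
# the assignment used by the initializer (Assign), the read op (Identity),
# and the random ops that generate the initial value.
op_types = {op.type for op in g.get_operations()}
print(sorted(op_types))
```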

Internals of a tf.Variable

When the variable is first willed into existence, it has no value. This is why, when you begin a session, you must first initialize your variables, and why an initializer is attached to the tf.Variable class (in our example we use a random distribution). The snippet above can be run, and sess.run(v) will print out a random number. However, try placing sess.run(v) before the variable is initialized, and you will receive an error.
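Both behaviors can be sketched in one session (variable name and initializer are assumptions): reading before initialization raises FailedPreconditionError, and reading after succeeds.

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

g = tf.Graph()
with g.as_default():
    v = tf.get_variable('v', shape=(), use_resource=False,
                        initializer=tf.random_normal_initializer())
    init = tf.global_variables_initializer()  # groups all variable initializers

caught = False
with tf.Session(graph=g) as sess:
    try:
        sess.run(v)  # variable not initialized yet -> error
    except tf.errors.FailedPreconditionError:
        caught = True
    sess.run(init)        # run the initializer ops
    val = sess.run(v)     # now returns a random number
print(caught, val)
```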

When using the Python API to work with variables, there is under-the-hood syntactic sugar that lets you use a variable as though it were a normal tensor output from a normal operation. A big difference to note: when we save the graph as a protobuf, the tf.Variable group of operations is saved, but the value stored in the tf.Variable is lost. To save the value we must use tf.train.Saver.

We perform some adds to our variable, then we run the saver. *Note: the files produced are actually checkpoint, model.ckpt.data-…, model.ckpt.index, and model.ckpt.meta. When we refer to /tmp/model.ckpt later, these files need to exist, so don’t move them around.

We can run the saver to restore our session variable. Since the value comes from our checkpoint file, no init is needed.
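A sketch of the round trip, using the article’s /tmp/model.ckpt path and variable name v (the add-one op and initializer are assumptions standing in for the lost gist):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

g = tf.Graph()
with g.as_default():
    v = tf.get_variable('v', shape=(), use_resource=False,
                        initializer=tf.zeros_initializer())
    add_one = tf.assign_add(v, 1.0)
    saver = tf.train.Saver()

with tf.Session(graph=g) as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(add_one)                    # v is now 1.0
    # Writes checkpoint, model.ckpt.data-..., .index, and .meta files.
    saver.save(sess, '/tmp/model.ckpt')

with tf.Session(graph=g) as sess:
    saver.restore(sess, '/tmp/model.ckpt')  # value comes from the checkpoint
    restored = sess.run(v)                  # no initializer run needed
print(restored)  # 1.0
```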

To put it most simply, the .ckpt file is just a map from our variable names to the values that were stored during the session. If I created an entirely new graph, as long as it had a variable named v, I would be able to restore the value from the checkpoint.
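This name-based restore can be sketched as follows (both graphs here are assumptions; only the variable name v and the /tmp/model.ckpt path come from the article):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Save a value for "v" from one graph.
g_old = tf.Graph()
with g_old.as_default():
    v = tf.get_variable('v', use_resource=False,
                        initializer=tf.constant(7.0))
    saver = tf.train.Saver()
with tf.Session(graph=g_old) as sess:
    sess.run(tf.global_variables_initializer())
    saver.save(sess, '/tmp/model.ckpt')

# Restore into an entirely different graph that also names a variable "v".
g_new = tf.Graph()
with g_new.as_default():
    v2 = tf.get_variable('v', shape=(), use_resource=False)
    doubled = v2 * 2.0                 # new computation, restored value
    saver2 = tf.train.Saver()
with tf.Session(graph=g_new) as sess:
    saver2.restore(sess, '/tmp/model.ckpt')  # matched by variable name
    value = sess.run(doubled)
print(value)  # 14.0
```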

In production, these .ckpt files can be written repeatedly to snapshot the progress of a TensorFlow graph running in a session.

I hope this was a helpful short overview of the low-level TensorFlow API.
