TensorFlow graphs are just protobufs.
Jupyter notebook here. (This is my first post here.)
In a TensorFlow computational graph, nodes are tf.Operation objects and edges carry tf.Tensor objects. A tf.Operation takes tf.Tensor objects as inputs and produces tf.Tensor objects as outputs.
Let’s create the computational graph (figure 1) using the mathematical tf.Operation objects provided here. Important note: we are only building the graph, not evaluating it. This is equivalent to typing numbers into a calculator without pressing enter.
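As a sketch of what that construction might look like (the exact nodes of figure 1 aren’t reproduced here, so the constants and names below are illustrative assumptions):

```python
import tensorflow as tf

# Build a graph; nothing is evaluated yet -- these are symbolic handles only.
g1 = tf.Graph()
with g1.as_default():
    a = tf.constant(5.0, name="a")      # illustrative input values
    b = tf.constant(3.0, name="b")
    total = tf.add(a, b, name="total")  # a tf.Operation whose output is a tf.Tensor

print(total)  # prints a symbolic Tensor description, not the number 8.0
```

Nothing has been computed at this point; we have only described the computation.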
Each of the operations above can take in a tensor-like object (scalar, list, NumPy array). tf.Tensor objects should feel very similar to NumPy arrays:
- They have similar data types, which must be conformed to (tf.DType: e.g. tf.float16, tf.string, tf.variant).
- They allow for n-dimensional shapes: 0-dimensional scalar, 1-dimensional vector, 2-dimensional matrix, etc.
- They have a similar matrix slice notation.
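A minimal sketch of those three points, assuming illustrative values of my own choosing:

```python
import numpy as np
import tensorflow as tf

g = tf.Graph()
with g.as_default():
    scalar = tf.constant(3.0)                          # 0-d; dtype inferred as tf.float32
    vector = tf.constant([1.0, 2.0, 3.0])              # 1-d from a Python list
    matrix = tf.constant(np.eye(2), dtype=tf.float32)  # 2-d from a NumPy array
    row = matrix[0]                                    # NumPy-style slicing on a tensor
```

Each slice or constant is itself another symbolic tf.Tensor in the same graph.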
However, tf.Tensor objects are only symbolic handles; they do not hold a concrete value until evaluated. A tf.Tensor is a redeemable prize ticket, not the prize itself.
We will use TensorBoard, a graph visualization tool, to view our built graph.
We have now defined a graph, which can be saved in a protocol buffer text format. Protocol buffers are a structured data notation, like JSON or XML. Protobuf is a preferred choice because it is very compact and strongly typed. TensorFlow graphs use the GraphDef protobuf to define a graph, and this GraphDef protobuf can be exported and imported.
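A sketch of that round trip, using an assumed toy graph: export the GraphDef from one graph and import it into a fresh, empty one.

```python
import tensorflow as tf

g1 = tf.Graph()
with g1.as_default():
    a = tf.constant(5.0, name="a")
    b = tf.constant(3.0, name="b")
    total = tf.add(a, b, name="total")

# Export: serialize the graph structure to a GraphDef protobuf.
graph_def = g1.as_graph_def()

# Import: reconstruct an equivalent graph from the protobuf.
g2 = tf.Graph()
with g2.as_default():
    tf.import_graph_def(graph_def, name="")
```

The reconstructed graph g2 contains the same operations under the same names.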
We can view graph g1 by running g1.as_graph_def(); this outputs the text form of the graph, which we view below.
This protobuf file contains everything needed to reconstruct a TensorFlow graph. You can load graph_protobuf.pbtxt to retrieve the program. Editing the internals of this file is analogous to programming a new graph.
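A self-contained sketch of saving and reloading that text file (the graph contents are assumed for illustration; only the graph_protobuf.pbtxt filename comes from the post):

```python
import tensorflow as tf
from google.protobuf import text_format

# Build a small graph and save it in protobuf *text* format.
g1 = tf.Graph()
with g1.as_default():
    a = tf.constant(5.0, name="a")
    b = tf.constant(3.0, name="b")
    total = tf.add(a, b, name="total")

tf.io.write_graph(g1.as_graph_def(), ".", "graph_protobuf.pbtxt", as_text=True)

# Parse the text file back into a GraphDef and reconstruct the graph from it.
with open("graph_protobuf.pbtxt") as f:
    graph_def = text_format.Parse(f.read(), tf.compat.v1.GraphDef())

g2 = tf.Graph()
with g2.as_default():
    tf.import_graph_def(graph_def, name="")
```

Any hand edit to the .pbtxt before parsing would, in the same way, produce a different graph program.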
So far we have only been admiring our graph program; we have not actually run it yet. We will do so with the TensorFlow Session API here.
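Continuing the calculator analogy, a Session is the enter key: it evaluates a symbolic tensor to a concrete value. A minimal sketch, again assuming a toy graph of my own:

```python
import tensorflow as tf

g1 = tf.Graph()
with g1.as_default():
    a = tf.constant(5.0, name="a")
    b = tf.constant(3.0, name="b")
    total = tf.add(a, b, name="total")

# Running the session redeems the "prize ticket" for an actual value.
with tf.compat.v1.Session(graph=g1) as sess:
    result = sess.run(total)

print(result)  # 8.0
```

Only at sess.run() does any arithmetic actually happen; everything before it was pure graph construction.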