Placeholders in TensorFlow


Kaustubh N
SomX Labs
3 min read · Jan 30, 2017


A quick recap of how a TensorFlow program executes:

  1. Create a graph (a.k.a. assembling the graph)
  2. Use a session to execute the graph

Let's say you have to assemble a graph for an awesome concept you have thought of, but sadly you don't know what the values should be yet, or it would be too much work to decide on them at this point. What would you do? Well, TensorFlow provides placeholders for exactly this scenario.

With placeholders we can assemble a graph without prior knowledge of the data it will operate on. Users of the program can then supply their own data during execution.

Placeholders

Syntax: tf.placeholder(dtype, shape=None, name=None)

A few examples:

a = tf.placeholder(tf.float32, shape=[3])  # placeholder vector of 3 elements with dtype float32
b = tf.constant([1, 2, 3], tf.float32)     # constant vector of 3 elements with dtype float32
c = tf.add(a, b)

with tf.Session() as sess:
    sess.run(c)  # throws an error because a does not have actual values assigned

How to assign values to placeholders?

One way to assign a value to a placeholder is to pass a dictionary to sess.run, mapping each placeholder to its actual values.

Replacing the last line of the code above with sess.run(c, {a: [1, 2, 4]}) makes it output [2. 4. 7.].

shape=None in the syntax means that the placeholder can accept a tensor of any shape. It is an easy way to use placeholders, but it is not advisable, since code with shape=None is tough to debug.

Placeholders are valid ops, so we can use them in other ops such as tf.add, tf.subtract, etc.

Lazy loading of variables in Tensorflow

TensorFlow programs sometimes defer creating an op that uses a variable until it is needed for computation. I am calling this lazy loading here; do not confuse it with any other concept of the same name. The intent is usually to keep the code short and the graph definition lean.

Normal loading of variables in an example

x = tf.Variable(10, name='x')
y = tf.Variable(20, name='y')
z = tf.add(x, y)  # the add node is created once, before executing the graph

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    writer = tf.summary.FileWriter('./my_graph/l2', sess.graph)
    for _ in range(10):
        sess.run(z)
    print(sess.run(z))
    writer.close()
writer.close()
# Output
30

Deferred (Lazy) loading of variables in an example

x = tf.Variable(10, name='x')
y = tf.Variable(20, name='y')

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    writer = tf.summary.FileWriter('./my_graph/l2', sess.graph)
    for _ in range(10):
        sess.run(tf.add(x, y))  # trying to save a line of code
    print(sess.run(tf.add(x, y)))
    writer.close()
# Output
30

The output in both cases is 30, but the way the graph is assembled is very different.

In the deferred case, a new Add node is appended to the graph on every call, and that can be a problem when reading the graph. Imagine building a graph with thousands of operations; this duplication can get your graph very messy. The trick is to separate the operations from the data.

I promised to keep my posts short and sweet, so I will stop here.

To get in touch tweet me at @kaustubhn

