Tensorflow Operations

Kaustubh N
SomX Labs
Jan 25, 2017

In TensorFlow, constants, variables and operations are collectively called ops.

In the introductory post about TensorFlow we saw how to write a basic program and learned about graphs, sessions and how they work. Now let's explore further and dig deeper into the details of TensorFlow.

The first program that we wrote in the previous post goes like this:

import tensorflow as tf

a = 3
b = 4
ops1 = tf.add(a, b)
ops2 = tf.mul(a, b)
ops3 = tf.sub(ops2, ops1)
with tf.Session() as sess:
    print(sess.run(ops3))
# Output
5

TensorBoard

TensorBoard is a sweet little utility that sets TensorFlow apart from other libraries and adds an enormous amount of value to the library. It's a dashboard to visualize graphs and results easily.

TensorBoard dashboard (source: tensorflow.org)

Let's see how to use it. Using tf.summary.FileWriter, we write the graph that we have created to a file.

writer = tf.summary.FileWriter('./graphs', sess.graph)

So now our program will look like this

import tensorflow as tf

a = 3
b = 4
ops1 = tf.add(a, b)
ops2 = tf.mul(a, b)
ops3 = tf.sub(ops2, ops1)
with tf.Session() as sess:
    writer = tf.summary.FileWriter('./graphs', sess.graph)
    sess.run(ops3)
    writer.close()

Once you execute the above code, it will generate an event file inside the graphs folder within the current folder.

To visualize the graph saved inside the graphs folder, let's use the tensorboard command line utility.

$ tensorboard --logdir="./graphs"
# Once it runs successfully, go to the browser and open http://localhost:6006/
TensorBoard Graph Visualized

The graph shows a pictorial representation of the code.

Constants in Tensorflow

In TensorFlow, constant values are defined using tf.constant.

# Define a constant in tensorflow
a = tf.constant(4)

The graph below is generated for the following code.

import tensorflow as tf

a = tf.constant(5)
b = tf.constant(3)
x = tf.add(a, b)

with tf.Session() as sess:
    writer = tf.summary.FileWriter('./graphs', sess.graph)
    sess.run(x)
    writer.close()

The nodes in the graph are named Const and Const_1, which gets cumbersome for complicated graphs. To name constants we use the name argument, as in a = tf.constant(5, name="a").

Graph with named constants a and b.

Operations can also be named: x = tf.add(a, b, name="addition").
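Putting the two together, here is a minimal sketch (assuming the same TF 1.x setup as above) where both the constants and the op are named, so the TensorBoard graph shows readable labels instead of Const and Const_1:

import tensorflow as tf

# named constants and a named op; the names only label the graph nodes,
# the Python variable names are unrelated
a = tf.constant(5, name="a")
b = tf.constant(3, name="b")
x = tf.add(a, b, name="addition")

with tf.Session() as sess:
    writer = tf.summary.FileWriter('./graphs', sess.graph)
    print(sess.run(x))  # ==> 8
    writer.close()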

A few more examples of constants

# Syntax to define a constant
tf.constant(value, dtype=None, shape=None, name='Const', verify_shape=False)
# Define a vector
a = tf.constant([2, 5], name="vec_a")
# Define a matrix
b = tf.constant([[1, 2], [2, 3], [3, 4]], name="mat_b")
# Tensor initialized with 0s. Syntax:
tf.zeros(shape, dtype=tf.float32, name=None)
c = tf.zeros([3, 4], dtype=tf.float32, name="z")  # 3x4 matrix filled with 0s
# Tensor of zeros with the same shape as the input tensor. Syntax:
tf.zeros_like(input_tensor, dtype=None, name=None, optimize=True)
d = tf.zeros_like([[1, 2], [3, 4]])  # ==> [[0, 0], [0, 0]]
# Similarly, syntax for 1s
tf.ones(shape, dtype=tf.float32, name=None)
tf.ones_like(input_tensor, dtype=None, name=None, optimize=True)
# More generalized: a tensor filled with any value
tf.fill([2, 3], 7)  # ==> [[7, 7, 7], [7, 7, 7]]

For more details about constants and range-like functions, please refer to the TensorFlow documentation on constant value tensors.
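As a quick illustration of the range-like functions, a small sketch (TF 1.x; the commented values are what these ops return):

lin = tf.linspace(10.0, 13.0, 4)  # ==> [10. 11. 12. 13.], start/stop must be floats
rng = tf.range(3, 18, 3)          # ==> [3 6 9 12 15]

with tf.Session() as sess:
    print(sess.run([lin, rng]))

Note that unlike their numpy counterparts, these sequences are tensors and are not iterable in Python.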

Operations in Tensorflow

TensorFlow math operations are very similar to the math operations in the Python numpy library. A few examples:

# Define constants
a = tf.constant([3, 6])
b = tf.constant([2, 2])
# Addition
tf.add(a, b) # >> [5 8]
tf.add_n([a, b, b]) # >> [7 10]. Equivalent to a + b + b
# Multiplication
tf.mul(a, b) # >> [6 12] because mul is element wise
# Matrix multiplication
tf.matmul(a, b) # >> ValueError
tf.matmul(tf.reshape(a, [1, 2]), tf.reshape(b, [2, 1])) # >> [[18]]
# Division
tf.div(a, b) # >> [1 3]
# Modulo
tf.mod(a, b) # >> [1 0]

See the TensorFlow documentation for the full list of math operations.

DataTypes in Tensorflow

TensorFlow accepts Python native types such as string, int, float, boolean, etc. A single value will be converted to a scalar (0-D tensor), a list to a vector (1-D tensor), a list of lists to a matrix (2-D tensor), and so on.

Let's see how to define tensors of different ranks using various data types.

Scalar or “0-D” Tensor

tensor_0 = 23
tf.zeros_like(tensor_0) # ==> 0
tf.ones_like(tensor_0) # ==> 1

Vector or “1-D” Tensor

tensor_1 = ['a', 'b', 'c']
tf.zeros_like(tensor_1) # ==> ["", "", ""]
tf.ones_like(tensor_1) # ==> Error

Matrix or “2-D” Tensor

tensor_2 = [[True, False], [False, True]]
tf.zeros_like(tensor_2) # ==> 2x2 tensor all elements are False
tf.ones_like(tensor_2) # ==> 2x2 tensor all elements are True

Just as numpy has its own data types, TensorFlow has its own, shown in the table below.

Table of TensorFlow data types (source: tensorflow.org)

TensorFlow integrates with numpy seamlessly, so we can use numpy data types in TensorFlow. However, it is not advisable to do so, since in the future this integration might not remain so seamless.
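For illustration, a small sketch of this interoperability (assuming TF 1.x and numpy imported as np):

import numpy as np
import tensorflow as tf

print(tf.int32 == np.int32)  # ==> True, the dtypes are currently interchangeable

# numpy arrays are accepted wherever tensors are expected
t = tf.ones_like(np.zeros((2, 2), dtype=np.float32))  # dtype inferred as tf.float32

with tf.Session() as sess:
    print(sess.run(t))  # ==> [[1. 1.] [1. 1.]]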

Then you might ask: why not just use tf.constant for everything?

That is a fair thought, but constants are stored in the graph definition itself, so defining a constant that is huge becomes a problem: loading and debugging the graph gets slow.

Only use constants for primitive types. Use variables or readers for data that requires more memory.
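To see why, one way (a sketch, TF 1.x) is to print the graph definition: the value of every constant is serialized directly into it, so a huge constant bloats the graph itself.

import tensorflow as tf

my_const = tf.constant([1.0, 2.0], name="my_const")

with tf.Session() as sess:
    # the content of my_const appears verbatim in the graph definition
    print(sess.graph.as_graph_def())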

Variables in Tensorflow

A variable in TensorFlow is very much like a variable in any other programming construct. There is, however, an important difference between tf.constant and tf.Variable. Yes, it's tf.Variable and not tf.variable; that is because tf.constant is an operation while tf.Variable is a class.

Let's check out a few ways to define variables.

# create variable a with scalar value 
a = tf.Variable(8, name="scalar")
# create variable b as a vector
b = tf.Variable([3, 7], name="vector")
# create variable c as a 2x2 matrix
c = tf.Variable([[1, 2], [3, 4]], name="matrix")
# create variable W as a 340x20 tensor, filled with zeros
W = tf.Variable(tf.zeros([340,20]))

Since tf.Variable is a class, it holds various ops such as initializer, value, assign, assign_add, etc.

Variables need to be initialized before they can be used in a graph. tf.global_variables_initializer() is the easiest way to initialize all variables in a graph at once.

# Initialize all variables
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)

# Initialize only a subset of variables
init_ab = tf.variables_initializer([a, b], name="init_ab")
with tf.Session() as sess:
    sess.run(init_ab)

# Initialize a single variable
W = tf.Variable(tf.zeros([340, 20]))
with tf.Session() as sess:
    sess.run(W.initializer)

To evaluate a variable we can use the eval() method of the tf.Variable class, e.g. W.eval().

W = tf.Variable(tf.zeros([340, 20]))
with tf.Session() as sess:
    sess.run(W.initializer)
    print(W.eval())
# Output
[[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
...,
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]]

To assign values to variables we can use the assign op, as in W.assign(value).

W = tf.Variable(5)
W.assign(10)
with tf.Session() as sess:
    sess.run(W.initializer)
    print(W.eval())  # >> 5

Why doesn't the above code output 10? Because W.assign(10) only creates an assign op, and in TensorFlow any op needs to be run in a session in order to take effect.

W = tf.Variable(5)
assign_op = W.assign(10)
with tf.Session() as sess:
    sess.run(assign_op)
    print(W.eval())  # >> 10

Did you notice that we did not need to initialize the variable and still got the correct output? That is because assign_op did the initialization for us (the initializer op is itself just an assign op that assigns the initial value).

Here is something interesting that we can do with the assign op.

# create a variable whose original value is 2
my_var = tf.Variable(2, name="my_var")
# assign my_var * 2 to my_var, and run that op repeatedly
my_var_times_two = my_var.assign(2 * my_var)
with tf.Session() as sess:
    sess.run(my_var.initializer)
    sess.run(my_var_times_two)  # >> 4
    sess.run(my_var_times_two)  # >> 8
    sess.run(my_var_times_two)  # >> 16

Calling the assign op multiple times operates on the value produced by the previous execution.

Other ops such as assign_add and assign_sub work similarly: my_var.assign_add(2) will add 2 to the value of my_var. The important difference between assign and assign_add/assign_sub is that the latter do not initialize the variable for you.
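For example, running assign_add on an uninitialized variable fails (a sketch; in TF 1.x this raises a FailedPreconditionError):

W = tf.Variable(10)
with tf.Session() as sess:
    # sess.run(W.assign_add(2))    # FailedPreconditionError: uninitialized value
    sess.run(W.initializer)        # initialize first
    print(sess.run(W.assign_add(2)))  # >> 12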

Since sessions execute graphs in TensorFlow, each session maintains its own copy of the variable values. Operations performed in session1 will not affect the values stored in session2. For example:

W = tf.Variable(20)
sess1 = tf.Session()
sess2 = tf.Session()
sess1.run(W.initializer)
sess2.run(W.initializer)
print(sess1.run(W.assign_add(10)))   # >> 30
print(sess2.run(W.assign_sub(2)))    # >> 18
print(sess1.run(W.assign_add(100)))  # >> 130
print(sess2.run(W.assign_sub(50)))   # >> -32
sess1.close()
sess2.close()

Session vs. InteractiveSession

An interactive session is one in which you do not have to specify the session every time; it's very much like working in the Python interactive interpreter.

sess = tf.InteractiveSession()
a = tf.constant(3)
b = tf.constant(4)
c = tf.mul(a, b)
# Use 'c.eval()' without specifying the context 'sess'
print(c.eval())  # ==> 12
sess.close()

That concludes today's post. This one is rather long; going forward I will make sure to keep the posts short and sweet. Till then, keep learning.

To get in touch say hello on twitter @kaustubhn.
