Learn Everything About TensorFlow

Shweta Pawar
Published in VisionNLP
Feb 6, 2019

Let’s learn TensorFlow…!!!

TensorFlow provides a variety of toolkits that allow you to write code in the language of your choice; in this article, for instance, we write code in Python. You can also choose the hardware your code should run on (CPU, GPU, etc.), as in the sketch below.
TensorFlow allows you to write numerical programs that solve mathematical operations and equations.
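
For example, here is a minimal sketch of pinning operations to a device with tf.device (the device strings "/cpu:0" and "/gpu:0" refer to the first CPU and GPU):

import tensorflow as tf

# Build the ops on the first CPU; use "/gpu:0" to target the first GPU instead
with tf.device("/cpu:0"):
    a = tf.constant([1.0, 2.0])
    b = tf.constant([3.0, 4.0])
    c = a + b

with tf.Session() as sess:
    print(sess.run(c))  # [4. 6.]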

TensorFlow Introduction:
TensorFlow is an awesome project from Google that lets us perform large and complex computations. The most popular library for basic computations is NumPy, but with TensorFlow we can run much larger-scale computations with highly optimized computation time.
The most popular deep learning applications are speech recognition, computer vision, robotics, information retrieval, natural language processing, geographic information extraction, and computational drug discovery.

1. What is a Tensor?
The basic data type in this framework is the tensor: an N-dimensional array of data.

A tensor is a vector or matrix of n dimensions that can represent all types of data.
The shape of the data is the dimensionality of the matrix or array.
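
As a quick sketch, here are tensors of increasing rank and their shapes:

import tensorflow as tf

scalar = tf.constant(3)                        # rank 0, shape ()
vector = tf.constant([1, 2, 3])                # rank 1, shape (3,)
matrix = tf.constant([[1, 2], [3, 4]])         # rank 2, shape (2, 2)
cube = tf.constant([[[1], [2]], [[3], [4]]])   # rank 3, shape (2, 2, 1)

print(scalar.shape, vector.shape, matrix.shape, cube.shape)
# () (3,) (2, 2) (2, 2, 1)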

2. Representation of a Tensor
In TensorFlow, a tensor can be thought of as a collection of feature vectors, i.e. an n-dimensional array.
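
For instance, a batch of feature vectors is commonly stored as a 2-D tensor of shape [batch_size, num_features]; this sketch assumes 4 features per example and leaves the batch size unspecified:

import tensorflow as tf

# Each row is one 4-dimensional feature vector; None lets the batch size vary
features = tf.placeholder(dtype=tf.float32, shape=[None, 4])
print(features.shape)  # (?, 4)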

3. Types of Tensor

4. Operations

TensorFlow revolves around the manipulation of tensors. There are four main kinds of tensors you can create:
tf.Variable, tf.constant, tf.placeholder, and tf.SparseTensor.
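
As a minimal sketch, here is one tensor of each kind (the values are just for illustration):

import tensorflow as tf

v = tf.Variable([1.0, 2.0])                    # value can change via ops
c = tf.constant([1.0, 2.0])                    # value is fixed
p = tf.placeholder(tf.float32, shape=[2])      # value is fed at run time
s = tf.SparseTensor(indices=[[0, 0]], values=[1.0], dense_shape=[2, 2])  # mostly-zero data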

Now let’s go through each of these TensorFlow building blocks one by one.

1. Variables

The recommended way to create a variable is to call the tf.get_variable function, which requires you to specify the variable’s name.
Variables are manipulated via the tf.Variable class. A tf.Variable represents a tensor whose value can be changed by running ops on it, and a tf.Variable exists outside the context of a single session.run call.
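
For example, a minimal sketch of tf.get_variable (the name, shape, and initializer here are just illustrative):

import tensorflow as tf

# Creates a variable named "weights" with shape [3, 2], filled with zeros
weights = tf.get_variable("weights", shape=[3, 2], dtype=tf.float32,
                          initializer=tf.zeros_initializer())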

import tensorflow as tf

var = tf.Variable(0)  # our first variable in the "global_variables" set
add_operation = tf.add(var, 1)
update_operation = tf.assign(var, add_operation)

with tf.Session() as sess:
    # once variables are defined, you have to initialize them like this
    sess.run(tf.global_variables_initializer())
    for _ in range(3):
        sess.run(update_operation)
        print(sess.run(var))
# Output
1
2
3

If a high-level framework like tf.Estimator or Keras is being used, the variables will be initialized for you automatically. Otherwise, tf.global_variables_initializer needs to be called to initialize the variables. You can initialize all the variables in a session using the following line of code.

session.run(tf.global_variables_initializer())
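
If you only want to initialize some of the variables, tf.variables_initializer accepts an explicit list; a small sketch (the variable names are illustrative):

import tensorflow as tf

w = tf.Variable(tf.zeros([2, 2]), name="w")
b = tf.Variable(tf.zeros([2]), name="b")

init_wb = tf.variables_initializer([w, b])  # initializes only w and b
with tf.Session() as sess:
    sess.run(init_wb)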

2. Constant

A constant is a tensor whose value cannot be changed at all.

tf.constant(
    value,
    dtype=None,
    shape=None,
    name='Const',
    verify_shape=False
)
  • Creates a constant tensor.
  • The resulting tensor is populated with values of type dtype, as specified by arguments value and (optionally) shape.
  • The argument value can be a constant value or a list of values of type dtype.
  • The argument shape is optional. If present, it specifies the dimensions of the resulting tensor. If not present, the shape of value is used.
  • If the argument dtype is not specified, then the type is inferred from the type of value.
# Constant 1-D Tensor populated with value list.
tensor_1 = tf.constant([1, 2, 3, 4, 5, 6, 7])
print(tensor_1.shape)
# Constant 2-D tensor populated with scalar value -1.
tensor_2 = tf.constant(-1.0, shape=[2, 3])
print(tensor_2.shape)
# output
(7,)
(2, 3)

3. Placeholder

tf.placeholder(
    dtype,
    shape=None,
    name=None
)
  • A placeholder’s purpose is to feed the tensor: it reserves a spot for data that will flow through the graph. To supply a placeholder with values, you use the feed_dict argument of session.run.
  • The placeholder will be fed only within a session.
  • In the next example, you will see how to create a placeholder with tf.placeholder and then feed it with actual values inside a session.
import tensorflow as tf

tf.reset_default_graph() # To clear the default graph
# Insert placeholders for tensors that will always be fed
x1 = tf.placeholder(dtype=tf.float32, shape=None)
y1 = tf.placeholder(dtype=tf.float32, shape=None)

z1 = x1 + y1  # sum of `x1` and `y1`
x2 = tf.placeholder(dtype=tf.float32, shape=[2, 1])
y2 = tf.placeholder(dtype=tf.float32, shape=[1, 2])
# Multiplies matrix `x2` by matrix `y2`, producing `x2` * `y2`
z2 = tf.matmul(x2, y2)

Right now we have only created empty slots; next, we are going to use these placeholders to feed in actual values. Let’s see.

with tf.Session() as sess:
    # when there is only one operation to run
    # runs operations and evaluates tensors in `fetches`
    z1_value = sess.run(z1, feed_dict={x1: 1, y1: 2})

    # when running multiple operations
    z1_value, z2_value = sess.run(
        [z1, z2],  # run them together
        feed_dict={  # define the value of every placeholder
            x1: 1, y1: 2,
            x2: [[2], [2]], y2: [[3, 3]]
        })
    print(z1_value)  # print value of z1
    print(z2_value)  # print value of z2
# output
3.0
[[6. 6.]
 [6. 6.]]

4. Session

A session is the main environment in which we actually run our TensorFlow operations.

TensorFlow works around 3 main components:

4.1 Graph: The graph is fundamental in TensorFlow. All of the mathematical operations (ops) are performed inside a graph. You can think of the graph as the blueprint on which every operation is recorded. The nodes represent these ops; they can consume existing tensors or produce new ones.
4.2 Tensor: A tensor represents the data that flows between operations. You saw previously how to initialize a tensor. The difference between a constant and a variable is that the value of a variable can change over time.
4.3 Session: A session executes the operations from the graph. To feed the graph with the values of a tensor, you need to open a session. Inside a session, you must run an operation to produce an output.

There are two different ways to run a session. But first, let’s see what happens when we try to read a result without one:

import tensorflow as tf

m1 = tf.constant([[2, 2]])
m2 = tf.constant([[3], [3]])
dot_operation = tf.matmul(m1, m2)

print(dot_operation) # wrong! no result
# output
Tensor("MatMul_1:0", shape=(1, 1), dtype=int32)

Method 1:

# method1 use session
sess = tf.Session()
result = sess.run(dot_operation)
print(result)
sess.close()
#output
[[12]]

Method 2:

# method2 use session
with tf.Session() as sess:
    result_ = sess.run(dot_operation)
    print(result_)
#output
[[12]]

5. Graph

  • TensorFlow relies on a clever approach to render operations: all computations are represented with a dataflow graph. The dataflow graph was designed to capture the data dependencies between individual operations. Mathematical formulas or algorithms are made up of a number of successive operations, and a graph is a convenient way to visualize how those computations are coordinated.
  • The graph consists of nodes and edges. A node represents an operation, i.e. a unit of computation. An edge is a tensor; it can carry a newly produced tensor or consume input data, depending on the dependencies between individual operations.
  • The structure of the graph connects the operations (i.e. the nodes) together and describes how they are fed. Note that the graph does not display the outputs of the operations; it only helps to visualize the connections between individual operations.
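
As a small sketch of this idea, you can list the nodes (operations) that TensorFlow has registered in the default graph:

import tensorflow as tf

tf.reset_default_graph()
a = tf.constant(2, name="a")
b = tf.constant(3, name="b")
c = tf.add(a, b, name="c")

# Every node in the dataflow graph is an operation
for op in tf.get_default_graph().get_operations():
    print(op.name)
# output
a
b
c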

Let’s see an example.

Imagine you want to evaluate the function f(x, z) = x·z + x² + z + c, where c is a constant (here c = 5).

TensorFlow will create a graph to execute the function, with one node for each operation (multiply, power, and the additions). The code below builds it:

tf.reset_default_graph()  # To clear the default graph
x = tf.get_variable("x", dtype=tf.int32, initializer=tf.constant([5]))
z = tf.get_variable("z", dtype=tf.int32, initializer=tf.constant([6]))
c = tf.constant([5], name="constant")
square = tf.constant([2], name="square")
f = tf.multiply(x, z) + tf.pow(x, square) + z + c

Session:

init = tf.global_variables_initializer()  # prepare to initialize all variables
with tf.Session() as sess:
    init.run()  # initialize x and z
    function_result = f.eval()
    print(function_result)
# output
[66]

Some Examples of Tensorflow coding:

Addition and Subtraction

import tensorflow as tf

a1 = tf.constant([1, 2, 3])
a2 = tf.constant([3, 4, 5])
a3 = a1 + a2
# (tf.add(a1, a2) also works.
# TensorFlow supports primitive operators)
with tf.Session() as session:
    print(session.run(a3))

# Result
[4 6 8]

a4 = a1 - a2
with tf.Session() as session:
    print(session.run(a4))

# Result
[-2 -2 -2]

Multiplication and Division

import tensorflow as tf
a1 = tf.constant([1, 2, 3])
a2 = tf.constant([3, 4, 5])
a3 = a1 * a2
# (tf.multiply(a1, a2) also works.
# TensorFlow supports primitive operators)
with tf.Session() as session:
    print(session.run(a3))

# Result
[3 8 15]

a4 = a1 / a2
with tf.Session() as session:
    print(a4.eval())
    # eval() can be used instead of session.run() to compute the result
    # of a particular tensor.

# Result
[0.33333333 0.5 0.6]

Conclusion

TensorFlow is mostly used as a backend framework whose modules are called through higher-level APIs such as Keras and TFLearn.
TensorFlow is used to solve complex problems like Image Classification, Object Recognition, Speech to Text, Text to Speech, etc.
In this article, we have learned about the structure and components of TensorFlow.
In the next article, we shall dive into Machine Learning and build our first Image Classification model using TensorFlow.

A few resources to learn about TensorFlow in-depth:

  1. TensorFlow documentation.
