
Getting Started with TensorFlow

The mathematical concept of a tensor can be broadly explained as follows: a scalar has the lowest dimensionality, it is followed by a vector and then by a matrix, and a tensor is simply the next object in this progression. Scalars, vectors, and matrices are tensors of rank 0, 1, and 2 respectively, so tensors are a generalization of the concepts we have seen so far.
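To make the rank idea concrete, here is a minimal sketch (assuming TensorFlow 1.x, where computations run inside a session) that builds tensors of rank 0 through 3 and asks TensorFlow for their ranks:

import tensorflow as tf

sess = tf.Session()

scalar = tf.constant(3.)                    # rank 0
vector = tf.constant([1., 2., 3.])          # rank 1
matrix = tf.constant([[1., 2.], [3., 4.]])  # rank 2
cube = tf.zeros([2, 2, 2])                  # rank 3

# tf.rank() reports the number of dimensions of each tensor.
print(sess.run([tf.rank(scalar), tf.rank(vector), tf.rank(matrix), tf.rank(cube)]))
# [0, 1, 2, 3]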

How TensorFlow Works

At first, computation in TensorFlow may seem needlessly complicated. But there is a reason for it: because of how TensorFlow treats computation, developing more complicated algorithms is relatively easy. We will look into the pseudocode of a TensorFlow algorithm.

How to do it…

Here we will introduce the general flow of TensorFlow algorithms.

  1. Transform and normalize data: Normally, input datasets do not come in the shape TensorFlow expects, so we need to transform them to the accepted shape before we can use them. Most algorithms also expect normalized data, and we will do this here as well. TensorFlow has built-in functions that can normalize the data for you, as follows:
    data = tf.nn.batch_norm_with_global_normalization(…)
  2. Partition datasets into train, test, and validation sets: We generally want to test our algorithms on a different set of data than the one we trained on.
  3. Set algorithm parameters (hyperparameters): Our algorithms usually have a set of parameters that we hold constant throughout the procedure, for example, the number of iterations or the learning rate. It is considered good form to initialize these together so that the reader or user can easily find them, as follows:
    learning_rate = 0.01
    batch_size = 100
    iterations = 1000
  4. Initialize variables and placeholders: TensorFlow depends on knowing what it can and cannot modify. During optimization, TensorFlow will adjust the variables (the weights and biases) to minimize a loss function, and we feed data in through placeholders. We need to declare both variables and placeholders with a size and data type so that TensorFlow knows what to expect. A minimal end-to-end sketch of this flow follows the list.
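The following is a hedged, illustrative sketch (assuming TensorFlow 1.x and NumPy; the toy data and variable names are not part of the original recipe) that wires placeholders, variables, a loss function, and an optimizer into a small training loop:

import numpy as np
import tensorflow as tf

# 1. Data (already transformed/normalized for this toy example).
x_data = np.random.rand(100).astype(np.float32)
y_data = 3. * x_data + 1.

# 2. Hyperparameters, declared together.
learning_rate = 0.01
iterations = 1000

# 3. Placeholders (data we feed in) and variables (values TensorFlow adjusts).
x = tf.placeholder(tf.float32, shape=[None])
y_target = tf.placeholder(tf.float32, shape=[None])
slope = tf.Variable(0.)
intercept = tf.Variable(0.)

# 4. Model, loss, and optimizer.
y_pred = slope * x + intercept
loss = tf.reduce_mean(tf.square(y_pred - y_target))
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)

# 5. Initialize variables and run the training loop.
sess = tf.Session()
sess.run(tf.global_variables_initializer())
for _ in range(iterations):
    sess.run(train_step, feed_dict={x: x_data, y_target: y_data})
print(sess.run([slope, intercept]))  # approaches [3.0, 1.0]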

Declaring Tensors

Tensors are the primary data structure that TensorFlow uses to operate on the computational graph. We can declare these tensors as variables or feed them in as placeholders. First, we must know how to create tensors.

How to do it…

Here we will cover the main ways to create tensors in TensorFlow:

  1. Tensors of similar shape: We can also initialize variables based on the shape of other tensors, as follows:
    zeros_similar = tf.zeros_like(constant_tsr)
    ones_similar = tf.ones_like(constant_tsr)
  2. We can also create tensors with random draws from a uniform distribution using random_uniform(). Note that this random uniform distribution draws from the interval that includes the minval but not the maxval (minval <= x < maxval); see the sketch after this list for an example call.
  3. To get a tensor with random draws from a normal distribution, use the following:
    randnorm_tsr = tf.random_normal([row_dim, col_dim], mean=0.0, stddev=1.0)
  4. There are also times when we wish to generate normal random values that are guaranteed to lie within certain bounds. The truncated_normal() function always picks normal values within two standard deviations of the specified mean. See the following:
    truncnorm_tsr = tf.truncated_normal([row_dim, col_dim], mean=0.0, stddev=1.0)
  5. We might also be interested in randomizing entries of arrays. To accomplish this, there are two functions that help us: random_shuffle() and random_crop(). See the following:
    shuffled_output = tf.random_shuffle(input_tensor)
    cropped_output = tf.random_crop(input_tensor, crop_size)
  6. We might be interested in randomly cropping an image of size (height, width, 3), where there are three color channels. To fix a dimension of the cropped output, you must give it the maximum size in that dimension:
    cropped_image = tf.random_crop(my_image, [height//2, width//2, 3])
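Here is a short sketch (assuming TensorFlow 1.x; row_dim and col_dim are illustrative sizes) that evaluates a few of these random tensors in a session, including the random_uniform() call referenced in the note above:

import tensorflow as tf

sess = tf.Session()
row_dim, col_dim = 2, 3

# Uniform draws from [minval, maxval): includes minval, excludes maxval.
randunif_tsr = tf.random_uniform([row_dim, col_dim], minval=0, maxval=1)

# Normal draws, and normal draws truncated to within two standard deviations.
randnorm_tsr = tf.random_normal([row_dim, col_dim], mean=0.0, stddev=1.0)
truncnorm_tsr = tf.truncated_normal([row_dim, col_dim], mean=0.0, stddev=1.0)

# Running the creation ops returns concrete NumPy arrays.
print(sess.run(randunif_tsr))
print(sess.run(randnorm_tsr))
print(sess.run(truncnorm_tsr))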

Using Placeholders and Variables

Placeholders and variables are key tools for using computational graphs in TensorFlow. We must understand the difference and when to best use them to our advantage.

How to do it…

The main way to create a variable is by using the Variable() function, which takes a tensor as an input and outputs a variable. This only declares the variable; we still need to initialize it. Initializing is what places the variable, with its corresponding methods, on the computational graph. Here is an example of creating and initializing a variable:
import tensorflow as tf

my_var = tf.Variable(tf.zeros([2,3]))
sess = tf.Session()
initialize_op = tf.global_variables_initializer()
sess.run(initialize_op)

  1. Placeholders get data from a feed_dict argument in the session. To put a placeholder in the graph, we must perform at least one operation on the placeholder.
  2. We initialize the graph, declare x to be a placeholder, and define y as the identity operation on x, which just returns x.
  3. We then create data to feed into the x placeholder and run the identity operation. It is worth noting that TensorFlow will not return a self-referenced placeholder in the feed dictionary. The code is shown here and the resulting graph is shown in the next section.
import numpy as np
import tensorflow as tf

sess = tf.Session()
x = tf.placeholder(tf.float32, shape=[2,2])
y = tf.identity(x)
x_vals = np.random.rand(2,2)
sess.run(y, feed_dict={x: x_vals})
# Note that sess.run(x, feed_dict={x: x_vals}) will result in a self-referencing error.

How it works…

The computational graph of initializing a variable as a tensor of zeros is shown in the following figure:

[Figure: computational graph for creating and initializing a variable from a tensor of zeros]
  1. TensorFlow must be informed about when it can initialize the variables.
  2. While each variable has an initializer method, the most common way to do this is to use the helper function global_variables_initializer(). This function creates an operation in the graph that initializes all the variables we have created, as follows:
    initializer_op = tf.global_variables_initializer()
  3. But if we want to initialize a variable based on the results of initializing another variable, we have to initialize variables in the order we want, as follows:
    sess = tf.Session()
    first_var = tf.Variable(tf.zeros([2,3]))
    sess.run(first_var.initializer)
    second_var = tf.Variable(tf.zeros_like(first_var))
    # Depends on first_var
    sess.run(second_var.initializer)

Working with Matrices

Understanding how TensorFlow works with matrices is very important to understanding the flow of data through computational graphs.

How to do it…

  1. Creating matrices: We can create two-dimensional matrices from NumPy arrays or nested lists, as we described in the earlier section on tensors. We can also use the tensor creation functions and specify a two-dimensional shape for functions such as zeros(), ones(), truncated_normal(), and so on. TensorFlow also allows us to create a diagonal matrix from a one-dimensional array or list with the function diag(), as follows:
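The original diag() example and the definition of the matrix D used in the inverse below are not included in this excerpt; the following sketch (assuming TensorFlow 1.x and NumPy; the values of D are inferred to be consistent with the printed inverse, not given in the text) shows one way they could look:

import numpy as np
import tensorflow as tf

sess = tf.Session()

# A diagonal (here, identity) matrix built from a one-dimensional list.
identity_matrix = tf.diag([1.0, 1.0, 1.0])
print(sess.run(identity_matrix))

# A 3x3 matrix from a NumPy array; these values reproduce the inverse printed below.
D = tf.convert_to_tensor(np.array([[1., 2., 3.],
                                   [-3., -7., -1.],
                                   [0., 5., -2.]]))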

Inverse:
print(sess.run(tf.matrix_inverse(D)))
[[-0.5 -0.5 -0.5]
[ 0.15789474 0.05263158 0.21052632]
[ 0.39473684 0.13157895 0.02631579]]

Note that the inverse method is based on the Cholesky decomposition if the matrix is symmetric positive definite or the LU decomposition otherwise.
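Continuing the sketch above, a quick sanity check is to multiply D by its inverse, which should recover the 3x3 identity matrix up to floating-point error:

# D times its inverse is (approximately) the identity matrix.
print(sess.run(tf.matmul(D, tf.matrix_inverse(D))))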

How it works…

TensorFlow provides all the tools for us to get started with numerical computations and adding such computations to our graphs. This notation might seem quite heavy for simple matrix operations. Remember that we are adding these operations to the graph and telling TensorFlow what tensors to run through those operations.

Implementing Activation Functions

When we start to use neural networks, we will use activation functions regularly because they are a mandatory part of any neural network. In TensorFlow, activation functions are non-linear operations that act on tensors; they operate in a similar way to the mathematical operations we saw previously. Activation functions serve many purposes, but the main idea is that they introduce a non-linearity into the graph while normalizing the outputs, which is what lets the network adjust its weights and biases to fit non-linear relationships. Start a TensorFlow graph session with the following commands:
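The commands themselves are not shown in this excerpt; the standard way to start a graph session in TensorFlow 1.x is:

import tensorflow as tf

sess = tf.Session()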

How to do it…

The activation functions live in the neural network (nn) library in TensorFlow. Besides using built-in activation functions, we can also design our own using TensorFlow operations. We can import the predefined activation functions (import tensorflow.nn as nn) or be explicit and write .nn in our function calls. Here, we choose to be explicit with each function call:

  1. There will be times when we wish to cap the linearly increasing part of the ReLU activation function, max(0,x), which TensorFlow provides as nn.relu(). We can do this by nesting the max(0,x) function into a min() function. The implementation that TensorFlow has is called the ReLU6 function, defined as min(max(0,x),6) and available as nn.relu6(); a small comparison sketch follows this item.
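A small sketch (assuming TensorFlow 1.x; the sample values are illustrative) comparing relu() and relu6() on a few inputs:

import tensorflow as tf

sess = tf.Session()
x = tf.constant([-3., 3., 10.])

# ReLU: max(0, x). Negative inputs become 0, positive inputs pass through.
print(sess.run(tf.nn.relu(x)))   # [ 0.  3. 10.]

# ReLU6: min(max(0, x), 6). The linearly increasing part is capped at 6.
print(sess.run(tf.nn.relu6(x)))  # [0. 3. 6.]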

How it works…

These activation functions are the way that we introduce nonlinearities in neural networks or other computational graphs in the future. It is important to note where in our network we are using activation functions. If the activation function has a range between 0 and 1 (sigmoid), then the computational graph can only output values between 0 and 1.
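For example (a minimal sketch assuming TensorFlow 1.x), the sigmoid squashes even extreme inputs into the (0, 1) interval:

import tensorflow as tf

sess = tf.Session()

# Large negative and positive inputs map to values close to 0 and 1.
print(sess.run(tf.nn.sigmoid([-10., 0., 10.])))
# approximately [4.5e-05, 0.5, 0.99995]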

Summary

We covered

  1. Declaring Variables and Tensors
  2. Using Placeholders and Variables
  3. Working with Matrices
  4. Implementing Activation Functions
