Fundamentals of TensorFlow (Low Level)

Odemakinde Elisha
7 min read · Apr 16, 2020
https://github.com/tensorflow/tensorflow

This tutorial series will help you understand the fundamental applications of TensorFlow and how to build your own neural network with it. You can download or access the code for this series on my GitHub page. The series is broken down into the following parts:

  1. Introduction to TensorFlow (TensorFlow fundamentals).
  2. Optimizers, losses and activation functions in TensorFlow (how to make your choice).
  3. Low-level TensorFlow for regression problems.
  4. Low-level TensorFlow for classification problems.

I would strongly advise you to follow the series sequentially so you don't miss any part; by the end, you will be comfortable writing any fully connected neural net to solve a machine learning problem. This article is focused on topic 1. Let's get started by learning what TensorFlow is.

Definition

TensorFlow is an end-to-end, open-source machine learning platform for everyone. It gives developers the flexibility of dataflow and differentiable programming. More importantly, it is a symbolic math library used for machine learning applications such as neural networks and deep learning. It can be used for both research and production, and it runs on multiple CPUs and GPUs. TensorFlow was developed by the Google Brain team; it was first released in November 2015, and version 1.0.0 followed on February 11, 2017. It is an open-source library written in Python, C++ and CUDA, and it is cross-platform: it is available on Linux, macOS and Windows. For more about TensorFlow, do check out their website here.

TensorFlow was coined from the word tensor. A tensor is an n-dimensional array; a matrix, for example, is a 2-dimensional tensor (a 2-dimensional array). All TensorFlow programs involve basic manipulations of tensors (tf.Tensors).
A tf.Tensor object represents a partially defined computation that will eventually produce a value. The basic TensorFlow operations we will look at include: tf.constant, tf.Variable, tf.add, tf.matmul, tf.placeholder, tf.global_variables_initializer, tf.ones, tf.zeros, tf.Session, tf.matrix_determinant, tf.matrix_inverse, tf.matrix_diag, tf.matrix_transpose, tf.argmax, tf.argmin, tf.argsort, tf.random_uniform, tf.ceil and tf.cast.

First and foremost, I would advise you to look here for how to get TensorFlow running on your personal computer. Also, know that you can always find more in the official documentation here.

Constants, sessions and global_variables_initializers

Just as in plain Python, where we might write c = 5 and say that c is a variable containing the integer 5 (or treat it as a constant), TensorFlow has its own way of defining such values. Here is how you define yours.

constant.py
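The embedded gist does not render in this copy, so here is a minimal reconstruction of what constant.py might contain, based on the description that follows. The tf.compat.v1 import style matches the text; the initial values of y and z are assumptions, chosen so the result matches the 53716 reported below.

```python
# Reconstruction of constant.py (values of y and z are assumptions).
import tensorflow.compat.v1 as tf

# Disable eager execution to get the low-level graph-and-session workflow.
tf.disable_eager_execution()

# Tensor variables; naming them makes them easy to identify on TensorBoard.
x = tf.Variable(34, name='x')
y = tf.Variable(12, name='y')
z = tf.Variable(45, name='z')

# The polynomial expression combining the three variables.
result = x**3 + y * x**2 + y * z

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(result.eval())  # -> 53716 with the values assumed above
```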

In the above gist, code lines 1-9 import the libraries we need. You should pay attention to lines 4 and 5: we import TensorFlow this way because we want to go low level, and every version of TensorFlow ships with support for the 1.x API. Line 5 disables eager execution, which would otherwise prevent us from having this low-level feel. Line 11 is the simplest way of defining a TensorFlow variable: x is assigned 34, that is, x is a tensor variable containing the integer 34. Another thing to pay close attention to is name='x'. Every tensor you declare appears on the computation graph (when you visualize it); if you don't name it, TensorFlow assigns it a random name. By setting the name to 'x', we know that when we see 'x' on TensorBoard, it is the variable containing the integer 34.

So in line 14 we compute the expression x**3 + y*x**2 + y*z, where x, y and z are all tensor variables. In order to compute anything, we always have to run tensors inside sessions. A TensorFlow session is simply the mechanism for executing a TensorFlow graph, which in this case means computing over our variables. Line 16 shows how to declare a TensorFlow session.

To get an output from your TensorFlow graph, you need to run it in a session by calling the eval method on the tensor. We will see later in this article how the eval method works in more detail, because it does more than just trigger execution: it can also take in information.

tf.global_variables_initializer is an initialization op that assigns every variable in the graph its initial value. Variables, unlike constants, are not usable until they have been initialized, so to avoid errors, always run this op at the start of your sessions. When you run the graph above, the result is 53716.

Constants

constants
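The original constants gist is not embedded here; below is a minimal sketch consistent with the description that follows. The exact constant values are assumptions (any pair summing to 7 matches the stated output).

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Declare constants; unlike variables, constants need no initializer.
a1 = tf.constant(3, name='a1')
a2 = tf.constant(4, name='a2')
a3 = tf.add(a1, a2)

with tf.Session() as sess:
    print(sess.run(a3))  # -> 7
```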

Just as with variables, the same style applies to constants. Line 1 shows how to declare a constant. To compute a3, you run it in a session, and the output is 7. Let's move on to matrix multiplication.

Matrix Multiplication

matrix multiplication
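The matrix multiplication gist is not embedded in this copy; a minimal reconstruction matching the shapes described below (20x5 ones, 5x20 zeros, 5x5 ones) might look like this:

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

m = tf.ones((20, 5), name='m')   # 20 rows, 5 columns of ones
n = tf.zeros((5, 20), name='n')  # 5 rows, 20 columns of zeros
k = tf.ones((5, 5), name='k')    # 5 rows, 5 columns of ones

# (5x20) @ (20x5) -> (5x5), then element-wise addition with k.
result = tf.add(tf.matmul(n, m), k)

with tf.Session() as sess:
    print(sess.run(result))  # a 5x5 array of ones
```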

A matrix is a 2-dimensional tensor, and the gist above shows that you can also perform matrix multiplication. Line 1 declares 'm', a tensor array of ones with 20 rows and 5 columns. The same goes for line 2: 'n' is a tensor array of zeros with 5 rows and 20 columns, and 'k' is a tensor array of ones with 5 rows and 5 columns. We want to multiply tensor arrays 'n' and 'm', then add the result to tensor array 'k'. The rules of matrix multiplication say that, to multiply two matrices together, the number of columns of the first must equal the number of rows of the second (which we satisfy). So when we multiply them using TensorFlow's matrix multiplication function, tf.matmul, the result is a 5 by 5 array, which also satisfies the matrix addition rule with respect to k. Our answer is therefore a 5 by 5 array of ones: multiplying 1 by 0 gives 0, and adding 1 to that gives 1 (that's exactly what happened). As always, you need to run this in a TensorFlow session to get the result.

Matrix determinant, inverse, diagonal, transpose

determinant, inverse, diagonal and transpose
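The gist for this section is not embedded here; below is a minimal sketch using the 1.x-style functions the article lists. One note: tf.matrix_diag builds a matrix from a diagonal vector, so to extract the diagonal vector of an existing matrix (as described below) I use tf.matrix_diag_part instead; the example matrix is an assumption.

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# An invertible example matrix (assumed for illustration).
arr = tf.constant([[4.0, 7.0],
                   [2.0, 6.0]], name='arr')

det = tf.matrix_determinant(arr)  # 4*6 - 7*2 = 10
inv = tf.matrix_inverse(arr)      # the matrix inverse of arr
diag = tf.matrix_diag_part(arr)   # the diagonal vector [4., 6.]
trans = tf.matrix_transpose(arr)  # rows become columns

with tf.Session() as sess:
    print(sess.run([det, inv, diag, trans]))
```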

Beyond the basic operations on tensor arrays (matrices) we have seen, you can do more, such as finding the determinant, inverse, diagonal vector or transpose of a matrix. All you need to do is declare a variable, as shown in the gist above, pass the array/tensor you want to transform into the function you want to use, and you have your answer. For example, you can find the inverse of a matrix arr by feeding arr into tf.matrix_inverse; the output is the matrix inverse. Always ensure you evaluate every one of your computations inside a TensorFlow session.

Argmax and argmin

argmax and argmin
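The argmax/argmin gist is not embedded in this copy; a minimal sketch (the example tensor is an assumption) could look like this:

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

t = tf.constant([[1, 9, 3],
                 [7, 2, 8]], name='t')

# Index of the largest / smallest entry along each row (axis=1),
# reducing the 2-D tensor to a 1-D tensor of indices.
row_max = tf.argmax(t, axis=1)  # -> [1, 2]
row_min = tf.argmin(t, axis=1)  # -> [0, 1]

with tf.Session() as sess:
    print(sess.run([row_max, row_min]))
```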

The argmax function is an important TensorFlow function that returns the index of the maximum entry of a tensor (argmin does the same for the minimum). It is most often used when you have a 2-dimensional tensor and want to reduce it along an axis into a 1-dimensional tensor of indices. The code above shows how to do this.

Confusion_matrix, random_uniform and round

The confusion matrix is a tabular matrix that helps us find the true positives (tp), false positives (fp), false negatives (fn) and true negatives (tn). These key quantities can then be used to calculate accuracy, recall, precision and other evaluation metrics. The code below shows how to use tf.random_uniform to randomly draw 90 values between 0 and 1 into a 1-dimensional tensor array; this is done for tensors a and b. tf.random_uniform returns floating-point numbers between 0 and 1, and tf.round then rounds each entry in the array to the nearest whole number. So when we pass in the two arrays, y_true and y_pred, we get a confusion matrix showing fn, fp, tn and tp.

confusion matrix
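The gist referenced above is not embedded here; a minimal reconstruction of the steps just described might look like the following (the cast to int32 is my addition, since confusion-matrix labels should be integers):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# 90 random floats in [0, 1), rounded to 0 or 1 and cast to integers.
a = tf.random_uniform((90,), minval=0, maxval=1)
b = tf.random_uniform((90,), minval=0, maxval=1)
y_true = tf.cast(tf.round(a), tf.int32)
y_pred = tf.cast(tf.round(b), tf.int32)

# Rows are true labels, columns are predictions:
# [[tn, fp],
#  [fn, tp]] for binary 0/1 labels.
cm = tf.confusion_matrix(y_true, y_pred)

with tf.Session() as sess:
    print(sess.run(cm))  # the four entries sum to 90
```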

Placeholders

A placeholder is a node in TensorFlow that we repeatedly feed with numbers, values, arrays or other information. You can liken a placeholder to a pipeline that ensures all input values are supplied before we can get the desired output result. The code below shows how to define a placeholder. In it, A is a placeholder that only accepts values of type float32, and any value passed to it must have 3 columns before it can be processed in a TensorFlow session. As you can see below, to evaluate B, information has to be fed in, and the conditions on A have to be satisfied.

Every value that needs to be evaluated must be run in a TensorFlow session. For placeholders, the information has to be passed into the feed_dict argument as a dictionary, in which the placeholder tensor is mapped to the value to be processed, just as in the code below. On evaluating B with respect to the input fed to the placeholder, b_val becomes [5, 6, 7] and b_val_u becomes [[9, 10, 11], [12, 13, 14]]. Take note: we will be using placeholders a lot in this series.

placeholders
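The placeholder gist is not embedded in this copy; below is a minimal reconstruction. The relationship B = A + 4 is an assumption, chosen because it is consistent with the outputs described above ([1, 2, 3] maps to [5, 6, 7], and [[5, 6, 7], [8, 9, 10]] maps to [[9, 10, 11], [12, 13, 14]]):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# A accepts float32 values with any number of rows but exactly 3 columns.
A = tf.placeholder(tf.float32, shape=(None, 3), name='A')
B = A + 4  # assumed relationship, consistent with the outputs in the text

with tf.Session() as sess:
    # eval takes information through its feed_dict argument.
    b_val = B.eval(feed_dict={A: [[1, 2, 3]]})
    b_val_u = B.eval(feed_dict={A: [[5, 6, 7], [8, 9, 10]]})
    print(b_val)    # [[5. 6. 7.]]
    print(b_val_u)  # [[ 9. 10. 11.] [12. 13. 14.]]
```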

I want to believe you have learned a lot. Kindly check out my next post on optimizers, activation functions and losses. Later in this series we will build on all of this to construct a neural network using low-level TensorFlow. Do share with friends and give lots of claps.


Odemakinde Elisha

Powering the next generation of AI solutions in the African Ecosystem.