A beginner introduction to TensorFlow (part-2)

Narasimha Prasanna HN
Published in Buzy Coders Camp · 3 min read · Jun 10, 2018

In the last part I wrote about some of the core theoretical concepts that are essential for building Machine Learning models with TensorFlow.

The core components of TensorFlow are Tensors and computation graphs (also called data flow graphs). TensorFlow is, at its heart, a framework for expressing operations as computation graphs. TensorFlow then divides a graph into many subgraphs that are independent of each other and executes those subgraphs in parallel; this is one of the major features of TensorFlow and has contributed a lot to its large-scale adoption. TensorFlow has another important feature: it provides a wide range of ready-made mathematical tools that can be used to solve various problems. These tools are themselves computation graphs, and once they are added to your program they are treated as subgraphs, because they become part of the computation graph you are going to build.
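To make "expressing operations as computation graphs" concrete, here is a toy sketch in plain Python (this is not TensorFlow's actual API, just an analogy): building the graph does no arithmetic, and values are only computed when you ask for them.

```python
# A toy "define then run" model, loosely mimicking how TensorFlow
# separates graph construction from execution.

class Node:
    def __init__(self, fn, inputs=()):
        self.fn, self.inputs = fn, inputs

    def run(self):
        # Recursively evaluate inputs, then apply this node's operation.
        return self.fn(*(n.run() for n in self.inputs))

def constant(v):
    return Node(lambda: v)

def add(x, y):
    return Node(lambda a, b: a + b, (x, y))

def mul(x, y):
    return Node(lambda a, b: a * b, (x, y))

# Building the graph: nothing is computed yet.
x = constant(4.0)
y = constant(5.0)
z = add(mul(x, x), y)   # z represents x*x + y as a graph, not a value

print(z.run())  # 21.0 -- computation happens only here
```

Notice that `z` is just a description of the computation; calling `run()` is what triggers the evaluation, which is exactly the idea behind TensorFlow's graphs.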

What happens when you execute a program using TensorFlow?

The above figure shows the simplest possible architecture of TensorFlow. The top level represents the programming-language interfaces; TensorFlow mainly supports C++ and Python (many other language bindings are available, but they are not as efficient as the C++ and Python ones).

Language bindings: Language bindings provide an interface for building graphs in a language you are familiar with. Note that this is just a layer that helps you build graphs; it does not execute them.

Compound graphs: It is very important to understand what compound graphs are. A compound graph is a combination of many subgraphs. Graphs built at the first layer are always compound graphs, because they combine ready-made subgraphs (operations) made available by TensorFlow. For example, if you are building a graph for the expression:

e = a*b + c + (a/b)

Then e is a compound graph, because it combines the subgraphs a*b and (a/b), joined by addition with c. (In TensorFlow terms, an expression is nothing but a computation graph.)
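The subgraphs a*b and (a/b) share no intermediate results, so an engine like TensorFlow is free to evaluate them concurrently. A minimal plain-Python sketch of that idea (again an analogy, not the TensorFlow API), using example values a=6, b=3, c=2:

```python
import concurrent.futures

a, b, c = 6.0, 3.0, 2.0

# The two independent subgraphs of e = a*b + c + (a/b):
subgraphs = {
    "mul": lambda: a * b,   # subgraph 1
    "div": lambda: a / b,   # subgraph 2 -- needs nothing from subgraph 1
}

# Because neither subgraph depends on the other's output, they can be
# scheduled in parallel, which is what TensorFlow's engine does at scale.
with concurrent.futures.ThreadPoolExecutor() as pool:
    futures = {name: pool.submit(fn) for name, fn in subgraphs.items()}
    results = {name: f.result() for name, f in futures.items()}

e = results["mul"] + c + results["div"]
print(e)  # 18.0 + 2.0 + 2.0 = 22.0
```

The final addition, of course, has to wait for both subgraphs, since it consumes their outputs.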

Core TensorFlow execution system:

Once you build and run a TensorFlow program, the language binding invokes the TensorFlow library, which contains the core execution system. The data sent by the binding to the core execution system is packaged in a container. This container is called a Session in TensorFlow, so you have to create a Session object that encapsulates all the operations and Tensors. The task of the core execution engine is simple:

  • Obtain the Session object and build a computation graph from it.
  • Identify subgraphs and their inputs.
  • Determine whether each subgraph is a pre-built TensorFlow operation or a user-defined operation.
  • Initialize the distributed environment by spawning master and worker processes.
  • Encapsulate subgraphs and send them to individual worker processes; the master process monitors all workers.
  • Interact with TensorFlow kernels to perform the mathematical operations.
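The "container" role of a Session can be sketched with a toy analogue in plain Python (the real `tf.Session` is far more sophisticated, but the shape is similar): it holds a graph and, given something to fetch, evaluates only the nodes needed to produce it.

```python
# A toy analogue of a TensorFlow 1.x-style Session: a container that
# holds a graph and evaluates the nodes needed for a requested fetch.

class ToySession:
    def __init__(self, graph):
        # graph maps node name -> ("const", value) or (function, input names)
        self.graph = graph

    def run(self, fetch, feed=None):
        feed = feed or {}

        def eval_node(name):
            if name in feed:            # fed values override graph constants
                return feed[name]
            op, args = self.graph[name]
            if op == "const":
                return args
            return op(*(eval_node(n) for n in args))

        return eval_node(fetch)

graph = {
    "a": ("const", 2.0),
    "b": ("const", 3.0),
    "sum": (lambda x, y: x + y, ["a", "b"]),
}

sess = ToySession(graph)
print(sess.run("sum"))                   # 5.0
print(sess.run("sum", feed={"a": 10.0}))  # 13.0
```

As in TensorFlow, the same graph can be run many times, with different fed inputs, without being rebuilt.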

TensorFlow ops: TensorFlow ops are a set of ready-made mathematical operations built in C and C++. They are subgraphs by nature and can be used directly in our programs. The range of operations is vast, from simple addition and multiplication all the way to neural-network activation functions, gradient-descent operations, loss ops, and so on. This huge collection makes TensorFlow not only a distributed execution engine but also a mathematical engine for building scientific applications. (TensorFlow also makes use of NumPy.)
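One way to picture this library of ops is as a registry of named, reusable operations. The sketch below is a plain-Python illustration of that idea (the names and formulas here are standard math, not TensorFlow's actual op registry):

```python
import math

# A toy "ops library": named, ready-made operations collected in one
# place, the way TensorFlow ships everything from add and mul up to
# activation functions and losses as reusable graph building blocks.

OPS = {
    "add": lambda x, y: x + y,
    "mul": lambda x, y: x * y,
    "relu": lambda x: max(0.0, x),                      # activation
    "sigmoid": lambda x: 1.0 / (1.0 + math.exp(-x)),    # activation
    "squared_error": lambda y, t: (y - t) ** 2,         # a simple loss
}

def apply_op(name, *args):
    return OPS[name](*args)

print(apply_op("relu", -3.0))               # 0.0
print(apply_op("sigmoid", 0.0))             # 0.5
print(apply_op("squared_error", 2.0, 5.0))  # 9.0
```

Because every op has the same calling shape, any of them can be dropped into a graph as a subgraph, which is exactly how TensorFlow's ready-made ops compose.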

Kernels: The core definition of TensorFlow goes like this: "TensorFlow is a library for Machine Learning and mathematical computing on heterogeneous platforms." Here, heterogeneous platforms means a large variety of devices with different configurations, different capabilities, and different memory constraints. TensorFlow can run on almost any platform because of its sophisticated kernels. These kernels are built for each platform and are used by the upper layers to interact with heterogeneous hardware devices. Kernels can take advantage of features such as XLA compilation and on-device CPU instruction sets like AVX to speed up mathematical operations.
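The key idea is that one logical op can have several device-specific implementations, with the right one chosen at run time. A very rough plain-Python sketch of that dispatch (the device names and kernels here are made up for illustration):

```python
# Toy kernel dispatch: one logical operation, several device-specific
# kernels, selected at run time -- a rough sketch of how a framework
# maps graph ops onto heterogeneous hardware.

KERNELS = {
    # Plain scalar kernel, as a generic CPU might run it.
    ("mul", "cpu"): lambda x, y: x * y,
    # Batched kernel working on whole lists at once, in the spirit of
    # a vectorized (e.g. AVX-style) implementation.
    ("mul", "vectorized"): lambda xs, ys: [x * y for x, y in zip(xs, ys)],
}

def run_op(op, device, *args):
    # Look up the kernel registered for this (op, device) pair.
    return KERNELS[(op, device)](*args)

print(run_op("mul", "cpu", 3.0, 4.0))               # 12.0
print(run_op("mul", "vectorized", [1, 2], [3, 4]))  # [3, 8]
```

The graph above the kernel layer stays the same either way; only the kernel chosen for the target device changes.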
