Dynamic Computation Graphs (DCGs) with TensorFlow Fold!!

Google has introduced a new tool under the TensorFlow umbrella: TensorFlow Fold.

If you are familiar with deep learning libraries such as TensorFlow, Chainer, Theano, Caffe and many more, you know that each has its own approach to building graph-based computation. Yet almost all of these machine learning/deep learning frameworks operate on static computation graphs and can't handle dynamic computation graphs (PyTorch, DyNet and Chainer are exceptions).

TensorFlow Fold is based on the paper "Deep Learning with Dynamic Computation Graphs". What an idea!!!

Source: research.googleblog.com

Why TensorFlow Fold?

We already have one beautiful tool, TensorFlow, which addresses some cool problems, but it has a limitation: it uses static graph computation. Batch processing of dynamic graphs is a very common requirement in applications such as computer vision and natural language processing. However, because individual examples vary in type and shape, batch processing such a data set with a static graph is almost impossible in the current TensorFlow framework.
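To make the limitation concrete, here is a toy, pure-Python sketch (not real TensorFlow code, and not Fold's implementation) of what a statically shaped graph implies: the "graph" is compiled for one fixed input shape, so it can only batch examples of exactly that shape.

```python
# Toy sketch (plain Python, not TensorFlow): a "static graph" is compiled
# for one fixed input shape, so it can only batch same-shape examples.

def make_static_sum_graph(length):
    """Return a runner that accepts only batches of fixed-length vectors."""
    def run(batch):
        for vec in batch:
            if len(vec) != length:
                raise ValueError("graph expects vectors of length %d" % length)
        return [sum(vec) for vec in batch]
    return run

graph = make_static_sum_graph(3)
print(graph([[1, 2, 3], [4, 5, 6]]))  # every example has shape (3,): [6, 15]
# graph([[1, 2], [1, 2, 3]]) would raise ValueError: shapes differ per example
```

With variable-length or tree-structured data (parse trees, molecules), every example would need its own graph, which defeats batching.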

TensorFlow Fold is not another deep-learning framework. It is an extension to TensorFlow that provides a TensorFlow implementation of the dynamic batching algorithm. Dynamic batching is an execution strategy for dynamic computation graphs.
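The core idea can be sketched in plain Python (a heavy simplification under my own assumptions, not the real Fold implementation): nodes from many differently shaped input trees are grouped by depth, so each group of identical operations can run as a single batched call instead of one call per node.

```python
# Pure-Python sketch of dynamic batching (a simplification, not Fold itself).
# Expression trees of different shapes are evaluated together: nodes are
# grouped by depth, and each group of 'add' nodes runs as one batched call.

from collections import defaultdict

def depth(node):
    # ('leaf', value) has depth 0; ('add', left, right) is 1 + deepest child
    return 0 if node[0] == 'leaf' else 1 + max(depth(node[1]), depth(node[2]))

def batched_add(xs, ys):
    # Stand-in for a single vectorized op over the whole batch (like tf.add)
    return [x + y for x, y in zip(xs, ys)]

def dynamic_batch_eval(trees):
    groups = defaultdict(list)            # depth -> nodes at that depth

    def collect(node):
        groups[depth(node)].append(node)
        if node[0] == 'add':
            collect(node[1])
            collect(node[2])

    for tree in trees:
        collect(tree)

    value = {}                            # id(node) -> computed value
    for d in sorted(groups):              # children always have smaller depth
        nodes = groups[d]
        if d == 0:
            for n in nodes:               # leaves just carry their value
                value[id(n)] = n[1]
        else:                             # one batched call per depth level
            lefts = [value[id(n[1])] for n in nodes]
            rights = [value[id(n[2])] for n in nodes]
            for n, v in zip(nodes, batched_add(lefts, rights)):
                value[id(n)] = v
    return [value[id(t)] for t in trees]

trees = [('add', ('leaf', 1), ('leaf', 2)),                        # 1 + 2
         ('add', ('add', ('leaf', 1), ('leaf', 2)), ('leaf', 3))]  # (1+2) + 3
print(dynamic_batch_eval(trees))  # [3, 6]
```

Even though the two trees have different shapes, the two depth-1 additions happen in one batched call; this is the effect dynamic batching achieves on real TensorFlow ops.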

Computation over data-flow graphs is a popular approach to deep learning with neural networks, especially in fields such as cheminformatics and natural-language understanding. In most frameworks, including TensorFlow, the graphs are static, which means batch processing is only available for data of the same type and shape. In most real data sets, however, each example has its own type or shape, so a neural network with a static graph cannot batch them.

TensorFlow Fold was introduced to overcome this problem.

Getting started!!!

Fold runs under Linux; Python 2.7 and Python 3.3+ are recommended. Install it with pip, ideally inside a virtualenv.

Please note that Fold requires TensorFlow 1.0; it is not compatible with earlier versions due to breaking API changes.

First install Python, pip, and Virtualenv:

sudo apt-get install python-pip python-dev python-virtualenv
# Create a virtualenv
virtualenv foo # for Python 2.7
virtualenv -p python3 foo # for Python 3.3+
# Activate the environment
source ./foo/bin/activate # if using bash
source ./foo/bin/activate.csh # if using csh
#  Install the pip package for TensorFlow. For Python 2.7 CPU-only, this will be:
pip install https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.0.0rc0-cp27-none-linux_x86_64.whl
#For Python 3.3+ and/or GPU, see here for the full list of available TF binaries.
#Check that TensorFlow can load:
python -c 'import tensorflow'
# Now install the pip package for TensorFlow Fold. For Python 2.7, this will be:
pip install https://storage.googleapis.com/tensorflow_fold/tensorflow_fold-0.0.1-cp27-none-linux_x86_64.whl
# For Python 3.3+:
pip install https://storage.googleapis.com/tensorflow_fold/tensorflow_fold-0.0.1-py3-none-linux_x86_64.whl

# Check that Fold installed successfully:
python -c 'import tensorflow_fold'

If everything goes well, try the examples below.

Next steps:

  1. Quickstart notebook
  2. TensorFlow Fold documentation
  3. TensorFlow: Concepts, Tools, and Techniques

There are other libraries and frameworks that also support dynamic graph computation. TensorFlow Fold is TensorFlow-based and has its own approach to tackling this problem.

In the accompanying paper, Google introduced a new algorithm called dynamic batching and developed a TensorFlow-based library called TensorFlow Fold, addressing the DCG problem both theoretically and empirically.
Through experimental implementations, they showed that their method is effective, and more efficient and concise than previous work.

The paper is here for more details.

The moral of the story: TensorFlow is no longer limited to static computation graphs!!!!

Give it a try, and let me know about your experience.


If you enjoyed this article, please don’t forget to Clap.

Let's connect on Stack Overflow, LinkedIn, Facebook & Twitter.