How good is TensorFlow as a deep learning library and what other libraries should one look out for?

I just answered this question on Quora and thought I’d share it with you here, since I am spending gobs of time playing with a few of these frameworks — primarily Theano/Keras, TensorFlow, and Torch/PyTorch.

Here is a [hopefully] more balanced view of the world of Deep Learning libraries as we [I] know it.


[1] Torch and PyTorch

There’s Lua, as Alexey posted, and with it you have Torch (and now PyTorch in Python). Contrary to what folks might believe, Facebook, Twitter and several other parties are actively building it. Facebook recently open-sourced PyTorch.

Take a look at who’s supporting and building it.


Pros:

  1. Easy to read and write
  2. Runs on GPU
  3. Enough pre-trained models to play with
  4. And you have PyTorch


Cons:

  1. Not enough commercial backing, though it’s building
  2. Not a large enough community behind it yet for more commercially accepted models. This could change, as Nvidia and Salesforce are super focused on their enterprise customers.


[2] Theano

Probably one of the oldest deep learning frameworks — born and bred in academia and written in Python. It handles multi-dimensional arrays like Numpy and is believed to be best suited for academia and data exploration.

Many open source deep learning libraries like Keras, Lasagne, Blocks are built on top of it.


Pros:

  1. Python and Numpy love is there
  2. The computation graph is a decent abstraction and fits stuff like RNNs fairly well
  3. Decent high-level wrappers, such as Keras


Cons:

  1. Still bulkier than newer frameworks
  2. No multi-GPU support
  3. Trouble when working in cloud environments
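To make the computation-graph point concrete: the core Theano idea is that you describe the computation symbolically first and only run it later. Here is a toy sketch of that idea in plain Python (no Theano required; the `Node` class and its operations are made up purely for illustration):

```python
# Toy illustration of the symbolic computation-graph idea behind Theano:
# build the graph first, evaluate it later with concrete values.

class Node:
    def __init__(self, op=None, parents=()):
        self.op = op            # None means this is an input (leaf) node
        self.parents = parents
    def __add__(self, other): return Node('add', (self, other))
    def __mul__(self, other): return Node('mul', (self, other))

def evaluate(node, env):
    """Recursively evaluate the graph, looking up leaves in env."""
    if node.op is None:
        return env[node]
    a, b = (evaluate(p, env) for p in node.parents)
    return a + b if node.op == 'add' else a * b

x, y = Node(), Node()
z = x * y + x                 # the graph is built here; nothing is computed yet
print(evaluate(z, {x: 3.0, y: 4.0}))   # -> 15.0
```

Theano takes this much further (compiling the graph to optimized C/GPU code and deriving gradients automatically), but the separation of "describe" from "run" is the essence.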

As I mentioned, her offspring are a lot more interesting. Let’s look at her children.

P.S.: Someone has done a neat job of comparing Theano vs. the others here on GitHub.


[3] Keras

Keras is a decent deep learning library that can run on top of Theano and TensorFlow. It has a clean Python API developed along similar lines to Torch’s.


Pros:

  1. Clean Python API inspired by Torch. It provides high-level building blocks; for instance, you can programmatically build your own model
  2. Pretty flexible — works with Theano, TensorFlow, CNTK etc.
  3. There is interest in the community in its further growth
  4. Pretty simple to create your own models:
import keras.layers as L
import keras.models as M

my_input = L.Input(shape=(100,))
intermediate = L.Dense(10, activation='relu')(my_input)
my_output = L.Dense(1, activation='sigmoid')(intermediate)  # sigmoid, not softmax: softmax over a single unit always outputs 1
model = M.Model(input=my_input, output=my_output)
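For intuition, here is what a tiny two-Dense-layer model of that shape actually computes, written out in plain numpy. The weights below are random stand-ins for what Keras would learn (and note that a softmax over a single output unit always yields 1.0, so a one-unit head is normally meant as a sigmoid):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.ones((1, 100))                         # one input vector of length 100
W1, b1 = rng.normal(size=(100, 10)), np.zeros(10)
W2, b2 = rng.normal(size=(10, 1)), np.zeros(1)

hidden = np.maximum(0.0, x @ W1 + b1)         # Dense(10, activation='relu')
output = 1.0 / (1.0 + np.exp(-(hidden @ W2 + b2)))  # Dense(1), sigmoid head
print(output.shape)                           # -> (1, 1)
```

That is all the model is: two affine maps with nonlinearities in between. Keras’s value is in learning `W1, b1, W2, b2` for you.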


Cons:

  1. Reported performance overhead compared with using TensorFlow directly
  2. Needs to do more work to gain love from the commercial community; there’s already a lot of passion among developers and researchers
  3. Distributed learning could be better integrated

I’d suggest you look up Lasagne and Blocks yourself. There’s also Kur, which is built on top of Keras.

[4] TensorFlow — The Rising Prince of the Valley

Created by Google, it goes deeper than just deep learning; it actually supports tools for reinforcement learning, which is nice. It is still developing, and it’s unclear what sort of commercial applications of TensorFlow will come; for now, its use in Google’s own cloud should be great for Google’s cloud customers, especially with the custom-made TPU.


Pros:

  1. Python and Numpy support
  2. Good computational graph abstraction, like his step-mom Theano
  3. Faster compile times than Theano
  4. TensorBoard is nice; I really love its functionality!
  5. Extremely popular within the development community as of 2017


Cons:

  1. Still slower than other frameworks
  2. Still not Torch-like
  3. Not enough pre-trained models for practical use (we have all run MNIST and the Iris set by now)
  4. No commercial backing (besides the fact that the world knows Google backs it)
  5. Drops in and out of Python to load each batch, so performance-wise some work still needs to be done
  6. Still very keen to see its commercial application in the enterprise — if there will ever be any; dynamic typing, for instance, is an issue in large development projects
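On the batch-loading point (item 5): in the classic feed-dict style, every batch is sliced and staged by Python code before the graph runtime ever sees it, which is where the overhead comes from. A framework-agnostic sketch of that Python-side loop (the function name is mine, for illustration only):

```python
import numpy as np

def iter_batches(data, batch_size):
    """Yield successive batches sliced out in Python. This per-batch
    Python work, repeated every step, is the overhead referred to above."""
    for start in range(0, len(data), batch_size):
        yield data[start:start + batch_size]

data = np.arange(10)
print([len(b) for b in iter_batches(data, 4)])   # -> [4, 4, 2]
```

Later TensorFlow releases added input pipelines that move this work off the training step, but at the time of writing it was a real cost.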

[5] Caffe & Caffe2

Caffe: A well-known and widely used framework that ported Matlab’s implementation of fast convolutional nets to C and C++. Although, AFAIK, it is not intended for text, audio or time-series data.


Pros:

  1. Good for feed-forward nets and image processing
  2. Train models without needing to write code
  3. Good to have a Python interface
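The “no need to write code” point refers to Caffe’s prototxt configs: networks are declared as plain text and then trained from the command line. A minimal illustrative layer definition (the layer and blob names here are made up):

```
layer {
  name: "fc1"
  type: "InnerProduct"
  bottom: "data"
  top: "fc1"
  inner_product_param {
    num_output: 10
  }
}
```

A whole network is just a stack of such blocks, plus a solver config for the training schedule — no imperative code required for standard architectures.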


Cons:

  1. Not so great for recurrent networks
  2. Can it handle big networks? (ImageNet, GoogLeNet etc.)
  3. Commercial and enterprise love is lacking
  4. Is it still alive?

Caffe2, however, has some form of backing from Facebook — maybe because Caffe’s creator went to work for them.


Pros:

  1. It’s scalable
  2. Lightweight
  3. BSD license; that helps


Cons:

  1. Will it come to the enterprises?
  2. How long will it live?


[6] The others

[1] Microsoft’s CNTK — there’s more work happening here than you can imagine.

[2] MXNET — (adopted by Amazon AWS; Apple too is secretly building stuff into its APU chip with it, after buying Carlos’ startup called Turi)

There are others: BigDL (on Apache Spark), Paddle (led by Baidu), DyNet (pushed by Carnegie Mellon), and of course Amazon’s own tensor engine DSSTNE — and I’m sure lots more are coming.


These are still early days, but it is quite important for developers, enterprises and commercial parties to choose wisely for both their development and production workloads, and to adopt the models that work best for them.

Time, after all, will be the most important resource we have to manage as the Deep Learning ecosystem explodes with libraries, toolkits and HW/SW implementations.

Good luck and keep learning #DailyLearningMode

Post Scriptum

As for a quick view of the frameworks comparison (note: this is from 2016)

And as for design choices (also from the same deck, which you can find here)

Originally published at