Machine Learning is a mix of high-performance hardware and open source software
The picture above captures the architecture of a Machine Learning system to a tee: high-performance hardware with pre-installed open source Deep Learning frameworks.
Let’s look at the frameworks in more detail:
Caffe is a deep learning framework written in C++ and developed by Berkeley AI Research (BAIR) together with community contributors. Like https://spark.apache.org/, it originated at Berkeley.
Caffe can process over 60M images per day with a single NVIDIA K40 GPU. That is 1 ms/image for inference and 4 ms/image for learning, and more recent library versions and hardware are faster still. The Caffe source code is hosted on GitHub.
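Those numbers are easy to sanity-check: at 1 ms per image, a day of wall-clock time holds 86.4M inferences, comfortably above the 60M figure. A quick back-of-envelope in Python:

```python
# Back-of-envelope check of the Caffe throughput claim (illustrative arithmetic only).
SECONDS_PER_DAY = 24 * 60 * 60          # 86,400 seconds
ms_per_image_inference = 1.0            # claimed inference latency

images_per_day = SECONDS_PER_DAY / (ms_per_image_inference / 1000.0)
print(f"{images_per_day:,.0f} images/day at 1 ms/image")  # 86,400,000
```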
Caffe is typical of many deep learning libraries, which often originate at universities.
And how do we actually use Caffe? Adil Moujahid has written a great blog post on that very question: A Practical Introduction to Deep Learning with Caffe and Python. He uses a dataset from Kaggle (recently acquired by Google) consisting of images of dogs and cats, and demonstrates how to build a machine learning model capable of detecting the correct animal (cat or dog) in new, unseen images.
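Whatever the framework, the last step of such a classifier reduces to picking the most probable class. A minimal sketch in plain Python, with hypothetical class names and probabilities standing in for the softmax output of a trained Caffe model:

```python
# Sketch of the final prediction step in a cat-vs-dog classifier.
# The class names and probabilities are hypothetical; in a real setup
# the probability vector would come from the model's softmax layer.
CLASSES = ["cat", "dog"]

def predict_label(probabilities):
    """Return the class whose predicted probability is highest."""
    best_index = max(range(len(probabilities)), key=lambda i: probabilities[i])
    return CLASSES[best_index]

print(predict_label([0.18, 0.82]))  # dog
```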
The Torch library is a scientific computing framework with wide support for machine learning algorithms. It puts GPUs first and exposes an interface to C via LuaJIT. Just like Caffe, the Torch library is hosted on GitHub.
TensorFlow is probably the best known of the frameworks: an open source software library for numerical computation using data flow graphs, originally developed in C++ and Python by researchers and engineers on the Google Brain team. Its source code, too, is on GitHub.
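The data-flow-graph idea is simple to sketch: operations become nodes, values flow along edges, and nothing is computed until the graph is evaluated. A toy illustration in plain Python (a conceptual sketch, not the TensorFlow API):

```python
# Toy data flow graph: nodes record operations, evaluation is deferred
# until eval() walks the graph.
class Node:
    def __init__(self, op, inputs=(), value=None):
        self.op, self.inputs, self.value = op, inputs, value

    def eval(self):
        """Recursively evaluate this node's subgraph."""
        if self.op == "const":
            return self.value
        args = [n.eval() for n in self.inputs]
        if self.op == "add":
            return args[0] + args[1]
        if self.op == "mul":
            return args[0] * args[1]
        raise ValueError(f"unknown op {self.op}")

def const(v): return Node("const", value=v)
def add(a, b): return Node("add", (a, b))
def mul(a, b): return Node("mul", (a, b))

# (2 + 3) * 4 — built first as a graph, only computed when evaluated
graph = mul(add(const(2), const(3)), const(4))
print(graph.eval())  # 20
```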
There are many tutorials on how to use TensorFlow at this link.
Theano was developed at the Université de Montréal. It is a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. It is tightly integrated with the NumPy library for scientific computation, and the code is on GitHub.
A deep learning tutorial with Theano can be found at this link.
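Theano's define-then-evaluate workflow can be sketched without the library itself: build a symbolic expression, then "compile" it into an ordinary callable. A conceptual sketch in plain Python (not the Theano API; all names here are made up):

```python
# Conceptual sketch of Theano's workflow: define a symbolic expression,
# compile it into a function, then evaluate it on concrete inputs.
def symbol(name):
    return ("var", name)

def add(a, b):
    return ("add", a, b)

def square(a):
    return ("mul", a, a)

def compile_expr(expr, arg_names):
    """Turn a symbolic expression tree into an ordinary Python function."""
    def evaluate(node, env):
        tag = node[0]
        if tag == "var":
            return env[node[1]]
        left = evaluate(node[1], env)
        right = evaluate(node[2], env)
        return left + right if tag == "add" else left * right
    return lambda *args: evaluate(expr, dict(zip(arg_names, args)))

x, y = symbol("x"), symbol("y")
f = compile_expr(add(square(x), y), ["x", "y"])  # f(x, y) = x**2 + y
print(f(3, 4))  # 13
```

Theano performs graph optimizations and code generation at the compile step, which is what the sketch's `compile_expr` stands in for.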
Chainer, developed in Japan and recently embraced by Intel, is another Python-based deep learning framework. It supports CUDA computation (CUDA is a parallel computing platform and programming model invented by NVIDIA), requires only a few lines of code to leverage a GPU, and runs on multiple GPUs. It supports a number of different neural network architectures such as feed-forward nets, convnets, recurrent nets and recursive nets. The Chainer code is on GitHub, with a good set of tutorials here.
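The feed-forward architecture mentioned above is straightforward to sketch: each unit takes a weighted sum of its inputs and applies a nonlinearity. A minimal single-neuron forward pass in plain Python (illustrative only, with made-up weights; not Chainer code):

```python
import math

def sigmoid(z):
    """Standard logistic activation."""
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    """One unit of a feed-forward layer: weighted sum, then activation."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(z)

out = neuron([1.0, 2.0], weights=[0.5, -0.25], bias=0.0)
print(round(out, 3))  # 0.5, since z = 0.5*1 - 0.25*2 = 0
```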
Of the supporting libraries, Bazel is a build system, NCCL is the NVIDIA Collective Communications Library, OpenBLAS is an implementation of the Basic Linear Algebra Subprograms (BLAS), and DIGITS is the NVIDIA Deep Learning GPU Training System, which can be used to train deep neural networks (DNNs) for image classification, segmentation and object detection tasks.
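To make the BLAS reference concrete: the workhorse routine in such libraries is the general matrix multiply (GEMM). A naive pure-Python version of the triple loop that libraries like OpenBLAS implement in heavily optimized form:

```python
def matmul(a, b):
    """Naive matrix multiply: the computation BLAS's GEMM performs,
    without the blocking, vectorization or threading that makes
    OpenBLAS fast."""
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner))
             for j in range(cols)]
            for i in range(rows)]

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```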
And what do the frameworks above have in common? A powerful combination of hardware and software: the strong position of C++ and Python in the Deep Learning space on the software side, and the vital role played by the GPU on the hardware side.