Deep learning in Go

The battle for Cloud — Story of “GO”KU

karthic Rao
9 min read · May 3, 2019

Deep learning and Machine learning haven't quite been Go's stronghold! But the enthusiasm for AI in the Go community has been growing.

This fun-filled, illustration-based article talks about the fundamentals of Machine learning and sheds light on the state of Deep learning in Go.

This is the story of “GO”ku and his quest to conquer Deep Learning!

Introducing “GO”ku!

“GO”ku on a quest to collect all the Dragon Balls and win the Cloud wars.

“GO”KU waged hard-fought battles in pursuit of the Cloud Dragon Balls!

This was the beginning of the Cloud wars!

And then “GO”ku succeeded in conquering the Cloud Native, API, DevOps and Database Dragon Balls. This led to the creation of several amazing projects in Go.

DGraph (a graph database), Minio (object storage), CockroachDB (distributed SQL), Etcd (a distributed key-value store), Kubernetes (for automating application deployment, scaling, and management) and Docker (which needs no mention) emerged from “GO”ku's possession of the Dragon Balls!

“GO”ku is not done yet! He is preparing for the most gruelling battle so far: the battle for the AI/ML Dragon Ball.

Deep learning in Go!

Let's do some explicit programming to understand what Machine learning is.

Game of functional mapping

Consider the table above. What’s the relationship between X and Y?

Well, that's easy, isn't it? Y = 2X (for example, X = 1, 2, 3, 4 maps to Y = 2, 4, 6, 8).

Let’s see something that’s slightly difficult.

What’s the relationship between X and Y here?

In this case it's slightly more complex: Y = 2 * X * X (so X = 1, 2, 3 maps to Y = 2, 8, 18).

That's what we mean when we say that the relationship between X and Y is more complex!
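In explicit programming, these two mappings are just functions we write by hand. Here's a tiny sketch in Go (the names f1 and f2 are only for illustration):

package main

import "fmt"

// Explicitly programmed mappings for the two tables above.
func f1(x float64) float64 { return 2 * x }     // Y = 2X
func f2(x float64) float64 { return 2 * x * x } // Y = 2 * X * X

func main() {
    fmt.Println(f1(3), f2(3)) // prints: 6 18
}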

Machine learning and Deep learning are used when you want to automate this process of finding a functional mapping between inputs and outputs over a large number of examples. The relationship/pattern found can later be used to make predictions and drive valuable business decisions.

As the complexity of the relationship between X and Y increases, Deep learning starts to shine!

X and Y can be all kinds of data! If X is text in English and Y is the corresponding text in French, once you find the pattern between them using Deep learning, what you essentially have is an AI for language translation! The same applies to a variety of X's and Y's.

Now imagine X to be images of Apples and Oranges, and Y to be either 0 (for Apple) or 1 (for Orange).

Now let's write a program with explicit logic to distinguish between them.
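Here's a purely hypothetical sketch in Go of what such explicit logic could look like (the colour-based rule and its thresholds are made up for illustration):

package main

import "fmt"

// classifyFruit returns 0 for Apple and 1 for Orange based on a hand-written
// rule over the image's average colour (the thresholds are purely hypothetical).
func classifyFruit(avgRed, avgGreen, avgBlue float64) int {
    // Oranges: a strong red + green (i.e. orange) mix with very little blue.
    if avgRed > 0.8 && avgGreen > 0.4 && avgBlue < 0.2 {
        return 1 // Orange
    }
    return 0 // Apple
}

func main() {
    fmt.Println(classifyFruit(0.9, 0.5, 0.1)) // 1 -> Orange
    fmt.Println(classifyFruit(0.7, 0.1, 0.1)) // 0 -> Apple
}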

Let's say explicit programming works for classifying between Apples and Oranges.

But how about writing explicit rules to distinguish between Dog breeds!?

It's a hard problem; in this case, you would let Deep learning do its magic.

You feed the Deep learning code a large number of example images of different dog breeds and let it find a pattern in them. Later it can use this understanding to predict the breed in new dog images.

Hey, there are so many breeds of dogs, and these images could look very different! How could it find the pattern in the pixels and identify a breed with just a few images?

Well, as you rightly guessed, it's not possible to generalize the pattern with just a few images. You need lots and lots of data to be able to do this.

How does Deep learning do what it does?

Deep learning uses the concepts of neurons and weights.

Weights are the edges in the Neural Network; consider them as tunable knobs. The nodes hold the data using which we try to find the pattern between X and Y.

The Learning / Training in Deep learning involves finding values for these tunable knobs, the weights, so that the functional mapping between X and Y is as accurate as possible. The algorithm used is called a Model. The process of finding values for these knobs is computationally intensive, so you save the resulting values of the weights/knobs and use them later to make predictions. This is called the Saved model / Trained model. These values can later be loaded into another program without having to train the model again.
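To make "tuning the knobs" concrete, here's a tiny sketch in Go (not how a real framework is implemented) that learns the single weight w in Y = w * X from the earlier Y = 2X examples using gradient descent:

package main

import "fmt"

func main() {
    // Training data from the earlier example: Y = 2X.
    xs := []float64{1, 2, 3, 4}
    ys := []float64{2, 4, 6, 8}

    w := 0.0   // the single tunable knob (weight)
    lr := 0.01 // learning rate: how big a nudge the knob gets each step

    // Training: repeatedly nudge w to reduce the squared error between w*x and y.
    for epoch := 0; epoch < 1000; epoch++ {
        grad := 0.0
        for i := range xs {
            pred := w * xs[i]
            grad += 2 * (pred - ys[i]) * xs[i]
        }
        w -= lr * grad / float64(len(xs))
    }

    // The "trained model" here is just the learned value of w.
    fmt.Printf("learned w = %.3f (the true relationship is Y = 2X)\n", w)
}

Saving that learned value of w and reusing it in another program is, in miniature, what a saved/trained model is.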

We've already seen the 5 major aspects of the Deep learning process:

  • Data
  • The Algorithm / Model
  • Learning / Training
  • Prediction
  • The trained model

Here are some excerpts from the official TensorFlow site, which gives an amazing explanation of Machine learning.

How to do Deep learning?

Welcome to TensorFlow!

TensorFlow is an end-to-end open source platform for machine learning.

TensorFlow is not just a Machine Learning-specific library; instead, it is a general-purpose computation library that represents computations with graphs. Its core is implemented in C++ and there are also bindings for different languages. The bindings for the Go programming language, unlike the Python ones, are a useful tool not only for using TensorFlow in Go but also for understanding how TensorFlow is implemented under the hood.

Tensorflow fundamentals: Tensors and computation graphs

First, we’re going to take a look at the tensor object type. Then we’ll have a graphical understanding of TensorFlow to define computations. Finally, we’ll run the graphs with sessions, showing how to substitute intermediate values.

Tensors

In TensorFlow, data isn't stored as integers, floats, or strings. These values are encapsulated in an object called a tensor, a fancy term for multidimensional arrays. If you pass a Python list to TensorFlow, it converts it into a tensor of the appropriate type.

You’ll hold constant values in tf.constant.

Tensors and constants in tensorflow

You can perform computations on the constants, but these tensors won’t be evaluated until a session is created and the tensor is run inside the session.

Tensorflow Session

Everything so far has just specified the TensorFlow graph. We haven’t yet computed anything. To do this, we need to start a session in which the computations will take place. The following code creates a new session:

sess = tf.Session()

TensorFlow’s API is built around the idea of a computational graph, a way of visualizing a mathematical process.

Running a session and evaluating a tensor

The code creates a session instance, sess, using tf.Session. The sess.run() function then evaluates the tensor and returns the result. Once you have a session open, sess.run(NN) will evaluate the given expression and return the result of the computation.

After you run the following code, you will see 'Hello World!' printed out:

import tensorflow as tf

# Create a TensorFlow object called hello_constant
hello_constant = tf.constant('Hello World!')

with tf.Session() as sess:
    # Run the tf.constant operation in the session
    output = sess.run(hello_constant)
    print(output)

Computation graph

Computations are structured in the form of a graph in TensorFlow. Let's dive a bit deeper into the computational graph and understand how computation graphs are formed.

Consider a simple computation. Let a, b, c be the variables used,

J = 3 * ( a + bc )

There are 3 distinct computational steps to arrive at the final value J; let's list them out:

  • u = b * c
  • v = a + u
  • J = 3 * v

These 3 computational steps can be represented as a graph, where each step is a node and the values flowing between the steps are the edges.
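As a peek ahead at the Go bindings covered later in this article, here is a sketch of the same three-step graph built and run with the official tensorflow/go and op packages (the constant values a = 5, b = 3, c = 2 are just for illustration):

package main

import (
    "fmt"

    tf "github.com/tensorflow/tensorflow/tensorflow/go"
    "github.com/tensorflow/tensorflow/tensorflow/go/op"
)

func main() {
    // Build the graph for J = 3 * (a + b*c) with a = 5, b = 3, c = 2.
    root := op.NewScope()
    a := op.Const(root.SubScope("a"), float32(5))
    b := op.Const(root.SubScope("b"), float32(3))
    c := op.Const(root.SubScope("c"), float32(2))
    three := op.Const(root.SubScope("three"), float32(3))

    u := op.Mul(root.SubScope("u"), b, c)     // step 1: u = b * c
    v := op.Add(root.SubScope("v"), a, u)     // step 2: v = a + u
    J := op.Mul(root.SubScope("J"), three, v) // step 3: J = 3 * v

    graph, err := root.Finalize()
    if err != nil {
        panic(err)
    }

    // Run the graph in a session and fetch J.
    sess, err := tf.NewSession(graph, nil)
    if err != nil {
        panic(err)
    }
    defer sess.Close()

    out, err := sess.Run(nil, []tf.Output{J}, nil)
    if err != nil {
        panic(err)
    }
    fmt.Println(out[0].Value()) // 33
}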

Note: Refer to one of my previous blogs, https://medium.com/ai-india/hello-world-tensorflow-6ce3f5bcbb6b, to continue reading more about TensorFlow.

TensorFlow™ is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them.

Let's create a simple computation graph to multiply two matrices:

A = [  1   2 ]
    [ -1  -2 ]

x = [  10 ]
    [ 100 ]

It's really simple to achieve this in Python. Here is the link to the notebook where you can run the code.

State of Deep learning in Go

The Go binding API for TensorFlow is designed for using trained models, not for training models from scratch.

TensorFlow provides APIs for use in Go programs. These APIs are particularly well-suited to loading models created in Python and executing them within a Go application.

Let's see how the program to multiply two matrices looks in Go:

Credits: Galeone Github
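The original snippet was embedded as an image; here is a minimal sketch of what it looks like, assuming the official Go bindings (github.com/tensorflow/tensorflow/tensorflow/go and its op package):

package main

import (
    "fmt"

    tf "github.com/tensorflow/tensorflow/tensorflow/go"
    "github.com/tensorflow/tensorflow/tensorflow/go/op"
)

func main() {
    // Build the graph: multiply A (2x2) by x (2x1).
    root := op.NewScope()
    A := op.Const(root.SubScope("A"), [][]float32{{1, 2}, {-1, -2}})
    x := op.Const(root.SubScope("x"), [][]float32{{10}, {100}})
    product := op.MatMul(root, A, x)

    graph, err := root.Finalize()
    if err != nil {
        panic(err)
    }

    // Run the graph in a session and fetch the product.
    sess, err := tf.NewSession(graph, nil)
    if err != nil {
        panic(err)
    }
    defer sess.Close()

    results, err := sess.Run(nil, []tf.Output{product}, nil)
    if err != nil {
        panic(err)
    }
    fmt.Println(results[0].Value()) // [[210] [-210]]
}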

This doesn’t seem to be easy and intuitive at all!

Loading a trained Inception model in Go

Here is an example test from the TensorFlow Go repo that loads an Inception model and makes a prediction.
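In spirit, that test boils down to something like the following sketch; the graph file name and the "input"/"output" node names are assumptions that depend on how the model was exported:

package main

import (
    "fmt"
    "io/ioutil"

    tf "github.com/tensorflow/tensorflow/tensorflow/go"
)

func main() {
    // Read the serialized GraphDef of the trained model (file name assumed).
    model, err := ioutil.ReadFile("tensorflow_inception_graph.pb")
    if err != nil {
        panic(err)
    }

    // Import the trained graph and open a session on it.
    graph := tf.NewGraph()
    if err := graph.Import(model, ""); err != nil {
        panic(err)
    }
    sess, err := tf.NewSession(graph, nil)
    if err != nil {
        panic(err)
    }
    defer sess.Close()

    // A dummy 1x224x224x3 image batch just to show the plumbing; the real
    // example decodes a JPEG into this shape first.
    img := make([][][][]float32, 1)
    img[0] = make([][][]float32, 224)
    for y := range img[0] {
        img[0][y] = make([][]float32, 224)
        for x := range img[0][y] {
            img[0][y][x] = make([]float32, 3)
        }
    }
    input, err := tf.NewTensor(img)
    if err != nil {
        panic(err)
    }

    // Feed the input node and fetch the predictions from the output node
    // ("input" / "output" are assumed names that depend on the exported model).
    out, err := sess.Run(
        map[tf.Output]*tf.Tensor{graph.Operation("input").Output(0): input},
        []tf.Output{graph.Operation("output").Output(0)},
        nil,
    )
    if err != nil {
        panic(err)
    }
    fmt.Println(out[0].Value()) // a vector of class probabilities
}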

Inception v3 is a widely used image recognition model that has been shown to attain greater than 78.1% accuracy on the ImageNet dataset. ImageNet is an ongoing research effort to provide researchers around the world with an easily accessible image database.

Let's take the trained model, use Go to load it and make predictions:

$ git clone https://github.com/hackintoshrao/inception-go.git
$ cd inception-go
$ docker build -t inception-go .
$ docker run -p 8080:8080 --rm inception-go

Summary

  • If you are a Deep learning engineer who is comfortable with Python, there is no need to use Go for Deep learning!
  • But if you are a Go developer with aspirations of building Deep learning applications, it makes sense to abstract yourself from the Deep learning concepts: take a trained model built using Python, load it in Go, containerize it and build applications.
  • The Deep learning community comprises a wide range of audiences: Mathematicians, Scientists, Statisticians, Developers, Doctors, Physicists and many more. Python doesn't look intimidating to non-developers.
  • Deep learning training is not expected to run in real time.
  • The algorithms are computation- and memory-intensive; the performance gain from merely transitioning to Go would not be significant!

All I can say is:

“GO”ku's goal of conquering the AI/ML Dragon Ball seems far-fetched as of today! Let's hope that he gets there, for the good of the Go community.

Let’s cheer for him,

GO GOku GO !

For more articles and updates follow me at twitter.com/@hackintoshrao.
