Who’s afraid of Machine Learning? Part 3 : About That Learning

Intro to ML (for mobile developers)

Britt Barak
Google Developer Experts
5 min read · Sep 18, 2018


The last post described how to create an Artificial Neural Network (ANN), inspired by the way our brains work: basically, how to create an algorithm that takes data and reaches a conclusion.

Many of the numbers in the previous post were merely guesses or random values. You might say: "Wait, you just told us you made up a bunch of stuff! What's up with that? How could this give us something we can trust?"

Teaching, learning, training

We will now "teach" the model, much the same way we would teach a baby:

We'll take a data set, meaning a bunch of images, each with a label that fits it. We'll show the model each image and ask: "With your current model, is this a strawberry or not?" After the model runs and produces a conclusion, we can compare it with the original label and say, "Yes, you were right! It was a strawberry," or "No, you were wrong."

Then, based on our feedback, the model can tweak the numbers that I guessed before.

The input can't be tweaked, of course, since the image and its features are objective. But the model can tweak the weights and the bias that I guessed (the blue and green numbers in the image). By tweaking them, the model makes it more likely that next time it will reach the right conclusion, and also that the conclusion will be more accurate, meaning a higher probability for the correct label.

This process of giving the model many, many, many labeled images, and tweaking the numbers to fit better, is called training. It is the heart of the "learning" process that the model goes through.

The goal, of course, is that once the model is trained enough, it will be able to take any image, even one that wasn't part of the training process, and produce an accurate enough conclusion. That's because it has learned well enough how to break the image into features, give them the right weights, and build a fitting calculation for deciding how the features affect the final conclusion.
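To make that a bit more concrete, here is a tiny, made-up Python sketch of that guess → feedback → tweak loop: a single "neuron" with invented features and a perceptron-style update. It's not a real framework and not the exact math from the previous post, just the idea of training:

```python
import random

# Start with guessed weights and bias, just like in the previous post.
weights = [random.uniform(-1, 1) for _ in range(3)]  # one weight per feature
bias = random.uniform(-1, 1)
learning_rate = 0.1  # how strongly each piece of feedback tweaks the numbers

def predict(features):
    """Combine the features with the current weights and bias."""
    score = sum(w * f for w, f in zip(weights, features)) + bias
    return 1 if score > 0 else 0  # 1 = "strawberry", 0 = "not a strawberry"

# A tiny labeled data set: (made-up features of an image, correct label).
# In reality there would be many, many more images and features.
training_data = [
    ([0.9, 0.8, 0.1], 1),  # red, round-ish, not furry  -> strawberry
    ([0.2, 0.1, 0.9], 0),  # not red, not round, furry  -> not a strawberry
    ([0.8, 0.7, 0.0], 1),
    ([0.1, 0.3, 0.6], 0),
]

for epoch in range(20):                      # show each image many times
    for features, label in training_data:
        guess = predict(features)            # "is this a strawberry?"
        error = label - guess                # our feedback: right or wrong?
        # Tweak the guessed numbers so next time we're closer to the truth.
        weights = [w + learning_rate * error * f for w, f in zip(weights, features)]
        bias += learning_rate * error

# After training, try an "image" that was NOT part of the training set.
print(predict([0.85, 0.75, 0.05]))  # hopefully 1 ("strawberry")
```

The important part is the tweak inside the loop: every piece of feedback nudges the guessed weights and bias a little, and after enough images the guesses stop being guesses.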

The Hidden Layer

Before continuing, there's one more thing I wasn't precise about, and that I'd like to point out:

I said that there are one input layer, one output layer, and one middle layer. Actually, this middle part often encapsulates many layers that each do a computation similar to the one we discussed. We usually call all of these middle layers the "hidden layer", as it hides a few layers which, together, are the heart of the computation and of the conclusion that is made.

Usually, each layer inside this hidden layer is in charge of a different detection task. The layers that are closer to the input are in charge of detecting simpler, more basic features, and as we get closer to the output, the features that are detected get more and more complex. For example, the first layer after the input might be in charge of detecting short lines and simple edges; the next layer can detect longer lines, and maybe corners. As we get closer to the output layer, the layers can detect simple patterns, then more complex patterns, then shapes, maybe a leaf, maybe an eye… It's like assembling a jigsaw puzzle: we gather the pieces that reveal the simple details, then the more complex ones, until at the end we can look at the whole picture and say, "OK, this is a strawberry."
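This series doesn't tie itself to any particular framework, but just to give a feeling for what "many layers hidden in the middle" looks like in code, here is a rough sketch using TensorFlow's Keras API as one possible example. The layer types and sizes are arbitrary, and the comments describe the intuition of what each layer tends to pick up, not something you program explicitly:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    # Hidden layers: each builds on the previous layer's output,
    # detecting more and more complex features.
    tf.keras.layers.Conv2D(16, 3, activation="relu",
                           input_shape=(128, 128, 3)),  # short lines, simple edges
    tf.keras.layers.Conv2D(32, 3, activation="relu"),   # longer lines, corners
    tf.keras.layers.Conv2D(64, 3, activation="relu"),   # simple patterns
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),       # shapes: a leaf? an eye?

    # Output layer: the probability that the image is a strawberry.
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
```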

(Photo by Ryoji Iwata on Unsplash)

Who does it all?

The craft of choosing the right model, the equations, the "tweaking" algorithm, gathering the data for training, conducting the training, and much more, is the job of a data scientist. Developers will often not be the ones doing it.

We can talk about how to do all of that, and might do so in later posts. It's a fascinating craft! But the important point is that it's a whole other profession, a whole other world. It's a bit like learning how to practice yoga or how to bake a cake: we can! But it's a whole other topic and skill.

Most often, we developers will get the trained, prepared model as a gift from the data scientists. The developer's job is to integrate the model into the application, give it the input, run it, and get the output. That's all.
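To illustrate how small that job can be, here is a minimal, hypothetical Python sketch of the developer's side, again using Keras as an example. The file name and the fake "image" are made up for illustration; in a real mobile app you'd more likely use a tool like ML Kit, which the next posts cover:

```python
import numpy as np
import tensorflow as tf

# Load the "gift": a model that someone else already trained.
# ("strawberry_model.h5" is a made-up file name for illustration.)
model = tf.keras.models.load_model("strawberry_model.h5")

# Give it an input (here, a random stand-in for a real, preprocessed photo).
image = np.random.rand(1, 128, 128, 3)

# Run it and get the output.
probability = model.predict(image)[0][0]
print(f"Strawberry probability: {probability:.2f}")
```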

So for the rest of this blog post series, we'll assume that we already have this gift: a model whose conclusions we can trust. The following posts will explore how to use it.

This is it for this time!

Next time we'll be introduced to a great tool for running an actual image-labeling model… ML Kit ✨ See you there: bit.ly/brittML-4

If you missed the previous parts, start here with the ML intro for devs: bit.ly/brittML-1

Thank you for reading! 👏❤🐸🍓
