Who’s Afraid of Machine Learning? Part 1: What Do They All Talk About?!?
Intro to ML (for mobile developers).
Lately, it seems like everyone is talking about AI, ML, DL… When this hype started, I got a little stressed by all these new terms. What does it all mean for me as a developer?
I’m not a data scientist, nor a machine learning expert. I’m a very curious mobile developer who did her research. I created this series of blog posts and talks to explain things the way I wish they had been explained to me.
This series is meant to explain some basic concepts in ML in an easy-to-grasp way. It will then demonstrate how you can put ML into practice with ML Kit and TensorFlow Lite. It will not provide in-depth data science knowledge; it’s intended as a practical intro to what a (mobile) developer needs to know to get started making smarter apps (or… apps with ML capabilities 😁)
In this series:
- Part 1: What do they all talk about? (← you are here 🍓)
- Part 2: How to make a machine learn?
- Part 3: More about that learning
- Part 4: Going Mobile! ML-Kit why and how?
- Part 5: Using a Local Model (coming soon ✨)
- Part 6: Using a Cloud Model (coming soon ✨)
- Part 7: Using a Custom Model (coming soon ✨)
Human beings have forever been fascinated by how nature works, and how they can use it for their own benefit.
For example: when humans wanted to create a machine that could fly through the air, what did they use for inspiration?
They used the anatomy of birds’ wings and chests!
And when they wanted to create machines that could detect objects underwater, or in the dark where they could barely see, what did they use for inspiration?
They were inspired by bats and dolphins to create sonar-based detectors!
In the 1990s, engineers in Japan were working on a faster bullet train, inspired by… bullets… not so much nature 😊 It was designed to ride very fast, reaching around 180 mph / 300 km/h. The problem was that it was very loud when it went through tunnels; you could hear it from miles away. What did scientists use for inspiration to fix that?
There’s a very small, adorable bird called the kingfisher. It’s special for being able to fly very fast, dive into water, catch a fish, and come out barely making a splash! That’s thanks to the structure of its beak and head, which scientists used as inspiration to fix the bullet train.
And when humans wanted to create a machine that could learn, that could take data and draw conclusions from it, what could they use for inspiration? What in nature learns? You guessed it: the brain!
How do brains learn?
Our brain contains many cells called nerve cells, also known as neurons. Each neuron is basically just a small piece of information, a bit of data.
How many of these do we have in our brain? 100 billion! Sounds like a lot. But how much is 100 billion? If you take 100 billion pieces of paper and stack them one on top of the other, you’ll get a stack around 5,000 miles / 8,500 km high, which is roughly the distance between London and Los Angeles [Dr. Joe Dispenza].
One neuron, one bit of information, doesn’t mean a lot. It’s like a single letter inside a whole book: a bit of data with no context means very little. A letter gets connected to a letter to form a word, which has more meaning than the individual letters. Then a word gets connected to a word to form a sentence, which has even more meaning. When we connect the sentences together, a book is formed, which has a whole lot of meaning.
In the same way, neurons, our bits of information, tend to be connected to other neurons of the same context, through actual physical connections called synapses. One neuron actually has many connections (or synapses): between 10,000 and 40,000. The connected neurons form sorts of communities or networks in our brain, which we call neural networks. For each skill or habit of ours, we have a community of neurons holding the information for that skill or habit. That way, we’d have a network for playing guitar, for example, a network for practicing yoga, and a network for developing for Android…
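As a tiny preview of where this analogy leads, here is a sketch of the artificial counterpart of a single neuron. This is purely illustrative (the function name, weights, and numbers are all made up, and real ML libraries do this very differently): it combines several input signals through weighted connections, the artificial stand-in for synapses, and “fires” only when the combined signal is strong enough.

```python
def neuron(inputs, weights, bias):
    """A toy artificial neuron: weighted sum of inputs + bias, then a step activation."""
    # Each input arrives over a "synapse" whose weight says how much it matters.
    signal = sum(i * w for i, w in zip(inputs, weights))
    # Step activation: the neuron fires (1) or stays quiet (0).
    return 1 if signal + bias > 0 else 0

# Two input signals; the weights decide how strongly each one influences the output.
print(neuron([1.0, 0.5], [0.8, -0.2], bias=-0.3))  # 0.8*1.0 - 0.2*0.5 - 0.3 = 0.4 > 0, so it fires: 1
```

Connect many of these toy neurons together, with the output of one feeding the inputs of others, and you get the artificial version of the neural networks described above.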
Whenever we learn something new, a new neuron is created and gets connected to the other related neurons. So whenever we learn something new, an actual physical connection, or more precisely thousands of them, is made in our brain.
This video shows REAL physical evidence of learning: a new connection being made between neurons. It’s in slow motion, and the quality is not that great… but… how awesome is that?!?
That’s it for now
That’s a little of what happens in our brain. How do we translate it into a machine, and make the machine learn like our brain does?
In the next post I’ll show an example to make that concept tangible for us. It will be a very simplified one, and there are many variants of everything I’ll mention. But the idea is to give us a tangible sense of how it may work.
See you there! ( → http://bit.ly/brittML-2 ←) 😍👏🍓
Links:
This series is based on some talks I’ve given.
Slides for the updated version 2 are here: bit.ly/BrittMLKit. Videos from conferences are coming soon.
Version 1 was given before ML Kit existed. A few videos are here (Android Makers 2018) and here (Chicago Roboto 2018).