It’s no longer fiction, folks. Machines can finally learn to do some of the basic thinking we do on a daily basis. Not quite the way we do it, but close. In the near future, I expect to see AIs built into phones and everyday personal devices handling tasks for us, like calling the plumber when you have a broken pipe or arranging your appointments the way you like them. But what is AI really, and how did we get this far?
Artificial Intelligence is intelligence exhibited by software or machines.
To the everyday Joe, computers are pretty much intelligent already. It’s common knowledge that computers exist that can simulate the motion of galaxies and tell how close to our dear planet a meteorite will fly by. Well, yes, but can the machine or software tell what’s in a picture? Well… no. So you can see it’s not really about solving the problem; it’s about learning to solve the problem. That’s called Machine Learning, and the branch we are going to talk about is Deep Learning.
In 1959, Arthur Samuel defined machine learning as a “field of study that gives computers the ability to learn without being explicitly programmed.”
The way we think, for example, is quite fascinating, don’t you think? Our brains are made up of cells called neurons, and each neuron communicates with the others, deciding what needs to be said or done. That’s a huge number of neurons doing some serious work; no wonder we need food and lots of sleep! To achieve this in a computer (an artificial neural network), we need groups of mathematical functions bundled up in an organised manner and told, “hey guys, learn to do this”. Here is how this would work.
We stack up neurons to receive input, process it (based on special mathematical functions) and give an output. So say we wanted to know what was in a picture. A neuron in the bottom layer picks a tiny piece of the picture and makes some calculations using it. It does not understand anything about the image in question; what it does understand is that it’s giving a signal that is useful to another neuron’s calculations, which in turn gives a signal to yet another neuron, and so on. At this point you can see there may be layers upon layers of neurons doing the actual work. At the top of the chain of layers sit a couple of neurons tasked with output. They look at all the computations below them and make the decision about what’s in the picture. And yes, it’s a slow process, especially compared to the power of the human brain, which can look at a cat in any form, shape or size and say, “Oh, that’s a cat”.
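To make that picture of stacked layers concrete, here’s a minimal sketch in Python. Everything in it is made up for illustration: a toy “image” of four pixel values, random weights standing in for what a real network would learn, and two output neurons whose scores you could read as “cat” vs “not cat”.

```python
import numpy as np

def sigmoid(x):
    # squashes any number into the range (0, 1), a common neuron "activation"
    return 1 / (1 + np.exp(-x))

def forward(pixels, layers):
    # pass the signal up through each layer of neurons in turn
    signal = pixels
    for weights in layers:
        signal = sigmoid(signal @ weights)
    return signal

rng = np.random.default_rng(0)
# two stacked layers: 4 inputs -> 3 hidden neurons -> 2 output neurons
layers = [rng.normal(size=(4, 3)), rng.normal(size=(3, 2))]

pixels = np.array([0.2, 0.9, 0.1, 0.5])  # a tiny made-up "image"
scores = forward(pixels, layers)          # two output scores between 0 and 1
```

With random weights the scores are meaningless, of course; learning is the process of nudging those weight matrices until the output neurons light up correctly.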
Deep learning comes in two flavours: supervised and unsupervised.
Supervised deep learning involves simply showing the software how to deal with a specific classification task, over and over, for a significant amount of time. More like:
This is a picture of a cat.
This is a picture of a cat walking.
This is a picture of a cat lying down…
And then at the end you ask, “What’s this?” … and it goes, “It is a cat.”
And you ask, “What is it doing?” … and it goes, “It is walking.”
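The “show it labelled examples, correct its mistakes” loop above can be sketched with the simplest possible learner, a single perceptron neuron. The feature vectors and labels below are entirely made up; a real system would learn from thousands of actual images, not four hand-picked numbers.

```python
import numpy as np

# toy labelled data: each row is a made-up feature vector, label 1 = "cat"
features = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
labels = np.array([1, 1, 0, 0])

weights = np.zeros(2)
bias = 0.0
for _ in range(20):  # show the examples repeatedly ("this is a cat...")
    for x, y in zip(features, labels):
        prediction = 1 if x @ weights + bias > 0 else 0
        # nudge the weights whenever the guess was wrong
        weights += (y - prediction) * x
        bias += (y - prediction)

def classify(x):
    return "cat" if x @ weights + bias > 0 else "not a cat"

print(classify(np.array([0.95, 0.05])))  # resembles the "cat" examples
```

That single correction rule, scaled up to millions of weights across many layers, is the essence of supervised training.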
In unsupervised deep learning, you leave the algorithm to decide based on patterns it picks out of the data. This can be very powerful in applications like personal assistants and games, as the system learns what to do from the patterns it finds. Its applications are endless.
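Here is a small sketch of that idea using k-means clustering, a classic unsupervised algorithm (not deep learning itself, but the same spirit). The points below are invented; notice that the code is never told there are “two groups”, it discovers them from the pattern alone.

```python
import numpy as np

# unlabelled points that happen to form two clumps; nobody tells the code that
points = np.array([[0.1, 0.2], [0.2, 0.1], [0.15, 0.15],
                   [0.9, 0.8], [0.8, 0.9], [0.85, 0.85]])

centres = points[[0, 3]].astype(float)  # start from two arbitrary points
for _ in range(10):
    # assign each point to its nearest centre...
    dist = np.linalg.norm(points[:, None] - centres[None, :], axis=2)
    assign = dist.argmin(axis=1)
    # ...then move each centre to the middle of its assigned points
    centres = np.array([points[assign == k].mean(axis=0) for k in range(2)])
```

After a few rounds, `assign` splits the points into the two natural groups, with no labels ever supplied.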
There is still a long way to go when it comes to machine learning and perfecting artificial intelligence. Will they learn to be sarcastic? Who knows. Once we figure out how to make them work faster and run on better hardware, we may well see lifelike AIs like the ones in the Halo series.