How Computers Learn — Explaining the Magic of AI
In almost every aspect of your life, you've probably heard the term artificial intelligence. You've probably also heard about Deep Learning, Neural Networks and Machine Learning. All of these "buzzwords" promise to completely change the way our digital world works, but they also present themselves as complex topics that are hard to enter because of their high level of technicality.
For many people, Machine Learning, Artificial Intelligence (AI), Deep Learning and Neural Networks are all the same thing. In reality, each of these is very different.
However… all four are interconnected. As IBM puts it:
Perhaps the easiest way to think about Artificial Intelligence, Machine Learning, Neural Networks, and Deep Learning is to think of them like Russian nesting dolls. Each is essentially a component of the prior term.
Machine Learning — Expert Detective
You can think of machine learning as an expert detective: it can recognize patterns in complex data with so many different attributes that it would be unrealistic for a human to sift through them. The biggest challenge in machine learning is the training process. There are three common approaches: supervised, unsupervised and reinforcement learning.
Supervised learning is the most straightforward of the three to grasp. The machine learning model receives labelled training data and uses an algorithm to find what data points with the same label have in common. By repeating this process many times, with a penalty whenever a wrong decision is made, the model becomes confident enough at finding patterns that when unlabelled data comes through and it recognizes a pattern, it assigns the associated label.
We can think of this just like teaching a child. If we show a child 10 pictures of an ice cream cone and 10 pictures of a chocolate bar and tell them which is which, we can train the child by asking what the items in each category have in common (e.g. chocolate bars are brown and rectangular while ice cream cones are pointy and colourful). The main difference between the child and a computer is that the child has senses and intuition that make classifying or discriminating very easy, while computers need many repetitions to get a grasp of the pattern. But once either is trained, you can show them a picture of an ice cream cone or a chocolate bar without saying which it is, and based on the learned criteria (Is it brown? Is it pointy?) they can provide a label with some amount of confidence.
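The chocolate-bar-versus-ice-cream idea can be sketched as a toy supervised classifier. This is a minimal illustration, not a production technique: the two features (brownness and pointiness) and all their values are invented for the example, and the "algorithm" is simply the nearest class average.

```python
# Toy supervised learning: learn from labelled examples described by
# two made-up features, brownness and pointiness (each 0..1).

def train(examples):
    """Compute the average feature vector (centroid) for each label."""
    sums, counts = {}, {}
    for features, label in examples:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            s[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in s]
            for label, s in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

training_data = [
    ([0.9, 0.1], "chocolate bar"),   # very brown, not pointy
    ([0.8, 0.2], "chocolate bar"),
    ([0.2, 0.9], "ice cream cone"),  # colourful, pointy
    ([0.3, 0.8], "ice cream cone"),
]
centroids = train(training_data)
print(predict(centroids, [0.85, 0.15]))  # → chocolate bar
```

Once trained on the labelled pictures, the model labels new, unlabelled inputs by the same criteria it extracted during training.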
Unsupervised learning, on the other hand, looks for patterns that we as humans have not yet noticed. Unlike a supervised model, an unsupervised machine learning model decides for itself which patterns matter. By looking through a high volume of unlabelled data, unsupervised learning has a better chance of finding relationships between factors that nobody thought to label.
Let's look at it like this: pretend you are a machine learning model that has to look through billions of different search queries for trends. There is no way to train on every classification and relationship between all those factors. But given enough time and data, the model could discover that people who search for "chicken recipes" respond well to McDonald's ads. With that being said, unsupervised learning is still not completely hands-off: humans usually need to validate the end result or give small hints so that the model stays effective and doesn't drift off course.
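A classic unsupervised technique is clustering, where the model groups unlabelled points without ever being told what the groups mean. The sketch below is a bare-bones k-means with k=2; the 2-D points are arbitrary illustrative data, not real search queries.

```python
# Minimal k-means clustering: the model discovers two groups in
# unlabelled 2-D data entirely on its own.

def kmeans(points, centres, steps=10):
    for _ in range(steps):
        # Assignment step: attach each point to its nearest centre.
        groups = [[] for _ in centres]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centres]
            groups[d.index(min(d))].append(p)
        # Update step: move each centre to the mean of its group.
        centres = [
            [sum(xs) / len(g) for xs in zip(*g)] if g else c
            for g, c in zip(groups, centres)
        ]
    return centres, groups

points = [(1, 1), (1.5, 2), (1, 0.5),      # one natural cluster...
          (8, 8), (9, 9), (8.5, 7.5)]      # ...and another
centres, groups = kmeans(points, centres=[(0, 0), (10, 10)])
print(centres)  # two centres, one near each natural cluster
```

No labels were ever supplied; the grouping emerged from the data itself, which is exactly the "finding patterns we haven't seen" idea above.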
Reinforcement learning works on the basic principle of maximizing reward and minimizing negative consequences. The model makes decisions, each of which is judged positively or negatively, and that judgement changes how the model makes its next decision. This creates a feedback loop in which behaviours that earned rewards are reinforced and then applied to brand-new situations.
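That feedback loop can be sketched with a tiny "two-armed bandit" agent. Everything here is invented for illustration: the hidden payout probabilities, the exploration rate, and the update rule (a simple running average of rewards per action).

```python
# Minimal reinforcement-learning loop: an agent repeatedly picks one
# of two actions, receives a reward, and nudges its value estimate
# toward whichever action pays off more often.
import random

random.seed(0)
true_payout = [0.3, 0.8]   # hidden from the agent
q = [0.0, 0.0]             # the agent's estimated value of each action
counts = [0, 0]

for step in range(2000):
    # Explore occasionally; otherwise exploit the current best estimate.
    if random.random() < 0.1:
        action = random.randrange(2)
    else:
        action = q.index(max(q))
    reward = 1.0 if random.random() < true_payout[action] else 0.0
    # Positive/negative feedback shifts the estimate for next time.
    counts[action] += 1
    q[action] += (reward - q[action]) / counts[action]

print(q.index(max(q)))  # the agent has learned that action 1 pays better
```

Each reward (or lack of one) directly changes how the next decision is made, which is the whole idea behind reinforcement learning.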
Neural Networks and Deep Learning — Computers Are People Too!
Honestly speaking, a considerable number of humanity's "inventions" have taken heavy inspiration from nature, and it's a similar story with Neural Networks and Deep Learning. Neural Networks try to replicate the neurons they are named after by either firing (output = 1) or not firing (output = 0), which cascades into the next layer of connected neurons, much like in our brains. Each neuron multiplies its inputs by weights, adds a bias, and sums the result: output = f(w₁x₁ + w₂x₂ + … + wₙxₙ + b). Running that summation through an activation function f brings the result into a range that's more useful for computers, often squashing it toward 0 or 1 with a sigmoid, or filtering it with a Leaky ReLU.
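A single artificial neuron is small enough to write out in full. The inputs, weights and bias below are arbitrary illustrative numbers; the three activation functions are standard textbook definitions.

```python
import math

def step(z):
    """Fire (1) or don't fire (0), like the binary neuron described."""
    return 1 if z > 0 else 0

def leaky_relu(z, slope=0.01):
    """Lets small negative signals 'leak' through instead of zeroing them."""
    return z if z > 0 else slope * z

def sigmoid(z):
    """Squashes any value into (0, 1)."""
    return 1 / (1 + math.exp(-z))

def neuron(inputs, weights, bias, activation):
    # The weighted sum Σ wᵢxᵢ + b, then the activation function.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return activation(z)

# z = 0.5*2.0 + (-1.0)*1.0 + 0.5 = 0.5 > 0, so the neuron fires.
print(neuron([0.5, -1.0], [2.0, 1.0], bias=0.5, activation=step))  # → 1
```

Swapping the activation (step, leaky ReLU, sigmoid) changes what kind of signal the neuron passes on to the next layer.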
Deep Learning is a qualifier of Neural Networks indicating that a model has more than three layers (including the input and output layers), making the complexity significantly greater and, by extension, "deeper."
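Stacking neurons into layers shows where the "more than three layers" count comes from. The network below (input, two hidden layers, output: four layers in total) would already qualify as deep by that definition; all weights and biases are arbitrary illustrative numbers, and no training happens here, only a forward pass.

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def layer(inputs, weights, biases):
    """One fully connected layer: each row of weights is one neuron."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

x  = [0.5, -0.2]                                          # input layer
h1 = layer(x,  [[0.1, 0.4], [-0.3, 0.8]], [0.0, 0.1])     # hidden layer 1
h2 = layer(h1, [[0.7, -0.5], [0.2, 0.9]], [0.05, -0.1])   # hidden layer 2
y  = layer(h2, [[1.0, -1.0]], [0.0])                      # output layer
print(y)  # a single output value between 0 and 1
```

Each layer's outputs cascade into the next, which is the "cascading impact" between connected neurons described above.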
Applying AI with Generation — GANs
As seen earlier, there are established ways of classifying and organizing large volumes of data. But the other side of that coin is the ability to create new data that is not just a derivative of the training data. This is a particularly difficult problem to solve, purely because of how abstract the concept of "creating" something is. Although there are many ways to enable AI to generate (create) items, one recent and very flexible proposition is the generative adversarial network (GAN).
GANs: the Inspector vs the Forger
The common example given to visualize how GANs evolve over time is that of an art inspector versus an art forger. At the beginning, both are equally inexperienced. Both have access to a set of real (and valuable) artwork; the forger tries to produce a similar piece (very poorly, to begin with) while the inspector is given the fake along with several real works and has to discriminate between them. At first the inspector struggles too, but the forgery is such an obvious outlier that it gives the inspector clues about what to look for (improving its ability to discriminate next time) and gives the forger clues about what not to do (resulting in better forgeries in the future). Repeating this process hundreds if not thousands of times, with each party slowly sharpening its skill, yields a much stronger inspector and forger, to the point where the inspector's rulings become a 50/50 coin flip, meaning the forger's output is almost indistinguishable from the real thing.
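The inspector-versus-forger loop can be caricatured in a few lines. This is emphatically not a real GAN (real ones train two neural networks with gradient descent); here the real "artworks" are just numbers near 10, the forger learns a single number to output, and the inspector learns a threshold for "looks real". All constants are invented for illustration.

```python
# Toy adversarial loop: a "forger" and an "inspector" improve together.
import random

random.seed(1)
forger_value = 0.0         # the forger starts out clueless
inspector_threshold = 0.0  # so does the inspector

for round_ in range(500):
    real = random.gauss(10, 0.5)           # a genuine "artwork"
    fake = forger_value + random.gauss(0, 0.1)
    # Inspector: nudge the threshold toward the midpoint between
    # the reals and fakes it has seen (above threshold = "real").
    inspector_threshold += 0.05 * ((real + fake) / 2 - inspector_threshold)
    # Forger: if the fake was caught (below threshold), move toward
    # whatever currently fools the inspector.
    if fake < inspector_threshold:
        forger_value += 0.1 * (inspector_threshold - forger_value) + 0.05

print(round(forger_value, 1))  # forgeries end up close to the real ~10
```

By the end, the forger's output sits so close to the real distribution that the inspector's threshold can barely separate them, mirroring the 50/50 endpoint described above.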
Applications of GANs
GANs have so far been applied disproportionately to images, and many of the results are astounding. Creating brand-new images from nothing but random noise has become very sophisticated; websites like thispersondoesnotexist.com show how realistic generated faces have become.
Another area of image generation is image substitution. For example, Google's Pixel 6 can remove whole people and buildings from photos with the power of GANs: using the surrounding context, convolutional GANs generate what would likely be in that space. Similarly, GANs can be used to transform one element into another, for example turning a horse into a zebra.
Future of GANs
The future of GANs lies in other realms of the digital world. Video generation, for example, is still in its early stages; it could lead to video substitution, where the GAN guesses missing frames in a clip based on the previous and following frames. GAN research into audio is also less advanced than imaging: projects like WaveGAN and MuseGAN have produced early examples, but there is plenty of progress still to be made.
More AI and Abstract Classification — NLP
NLP, otherwise known as Natural Language Processing, is another area of artificial intelligence, focused on our ability to communicate with computers. Humans speak in very abstract terms that make sense to other people but are very difficult for computers to parse. By tokenizing a phrase and assigning each token a part-of-speech label, computers can slowly build up an understanding of what is being said.
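Those first two steps, tokenizing and part-of-speech labelling, can be sketched very simply. A real NLP system would use a trained tagger; the tiny hand-made lookup table below is purely for illustration.

```python
import re

# A made-up mini lexicon mapping words to part-of-speech tags.
POS_LOOKUP = {
    "the": "DET", "a": "DET",
    "dog": "NOUN", "ball": "NOUN",
    "chased": "VERB", "red": "ADJ",
}

def tokenize(text):
    """Lowercase the phrase and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def tag(tokens):
    """Attach a part-of-speech label to each token (UNK if unknown)."""
    return [(t, POS_LOOKUP.get(t, "UNK")) for t in tokens]

print(tag(tokenize("The dog chased a red ball")))
```

Breaking "The dog chased a red ball" into labelled tokens is the very first rung of the ladder; understanding sentiment or context builds on top of structure like this.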
We can broadly divide natural language work into understanding language and generating language. Right now much of the focus is on the understanding side (e.g. sentiment analysis or context identification), which is the key to the generation side. This is not a simple task, which is why many of the chatbots we use still rely on yes-or-no phrases or act as a marginally better search engine.
The Current and Future Applications of NLP
As mentioned earlier, one of the major areas for NLP is communication with humans, whether through a chatbot, Siri or something else. But NLP also opens the door to a massive untapped pool of information. Once abstract language can be processed and its meaning derived at mass scale, like that of Twitter posts, tech giants have a huge incentive to work in this area. Companies like Amazon already use sentiment analysis to categorize reviews and tailor recommendations. On the other hand, language generation at its best has the potential to create content like books, poetry or scripts.
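A crude sentiment analyzer gives a feel for how review categorization might start. Real systems are trained models rather than word lists; the lexicon below is invented purely for illustration.

```python
# Toy lexicon-based sentiment analysis: count positive and negative
# words and compare. The word lists are made up for this example.
POSITIVE = {"great", "love", "excellent", "fast"}
NEGATIVE = {"broken", "slow", "terrible", "waste"}

def sentiment(review):
    words = review.lower().split()
    score = (sum(w in POSITIVE for w in words)
             - sum(w in NEGATIVE for w in words))
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("Great product, I love it"))  # → positive
print(sentiment("Arrived broken and slow"))   # → negative
```

Even this naive counter hints at why sentiment is the "understanding" half of NLP: the meaning has to be extracted before anything useful, like a recommendation, can be built on top of it.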
AI is the Future
It's not an unreasonable conclusion that AI is going to be an irreplaceable cornerstone of future civilization, just as it is fast becoming today. Among its highest-impact areas are Machine Learning and Neural Networks, which completely reshape how we approach problems we once thought were impossible to solve.
GANs and NLP are definitely not the only applications of AI, but they are the two areas I am looking into most. The key dilemma is how this technology is going to be used in the future. Because, with great power comes great responsibility.
- Artificial Intelligence is different from Machine Learning which is different from Neural Networks and Deep Learning
- Machine learning can be sub-classified into three areas: supervised, unsupervised and reinforcement learning
- Neural networks are algorithms that take in inputs, apply weights and biases, and slowly adjust them for the desired effect
- Artificial intelligence has numerous applications, usually based in classification and creation
- GANs are an example of both where two models compete to create the best brand new data from random noise
- NLP is an example of both areas although it highlights the difficulties of classifying abstract concepts
This is definitely just the tip of the iceberg, and the topics only get more complex from here. But if you enjoyed reading this, consider giving it a clap (👏🏽) and subscribing. All my links are below :)