Explaining Machine Learning to my Grandfather

Hi Grandpa! When I tell you “Machine Learning” what is the first thing that comes to mind?

Juan David Restrepo
7 min read · Jan 26, 2020

The challenge is to explain to my grandparents this term that has become so fashionable in recent years. It was already known back in the 1950s, but the limits on how much data computers could process meant it could not have the implementation and application it has gained in recent years and will keep gaining in the years to come.

Machine Learning algorithms try to learn from data, and the more data available to learn from, and the richer and more complete that data is, the better the algorithm will be.

What is an algorithm?

Basic Algorithm definition

In mathematics and computer science, an algorithm is a finite sequence of well-defined, computer-implementable instructions, typically to solve a class of problems or to perform a computation.
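To make that definition concrete, here is a tiny sketch of an algorithm in Python: a finite sequence of well-defined steps that computes the average of a list of numbers (the numbers are just made-up examples):

    # A tiny algorithm: a finite, well-defined sequence of steps.
    def average(numbers):
        total = 0
        for n in numbers:              # step 1: add up every number
            total += n
        return total / len(numbers)    # step 2: divide by how many numbers there are

    print(average([4, 8, 15, 16]))     # prints 10.75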

How to build a Machine Learning algorithm?

Ok Grandpa, Machine Learning is part of what is known as Artificial Intelligence (AI), and it focuses on trying to emulate the way we human beings learn: how we learn through our senses, and how related experiences and memories can be applied to the new problems we face.

Based on this definition of an algorithm, and taking into account that the concept of Machine Learning is about building algorithms from data, we will try to explain with an example what the process of building a Machine Learning algorithm would look like.

Ideally, the training data (In) should be labeled (On). What do training and labeled data mean? We must have the data organized and relevant, containing the information that can help us determine the behavior we want to predict.

For example, imagine that we want an algorithm that detects whether a soccer player will score in the next match or not, based on certain information and characteristics of the player.

Some of these features could be height, goals scored per season, minutes played, shooting power, and other technical data about the player. Suppose we have a history with the characteristics of multiple soccer players who have faced the team we will play in the next game, and we already know whether each of those players scored or not.
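As a rough sketch of what such labeled data could look like (the feature values and labels below are invented for illustration), each row holds a player's features (In) and each label says whether that player scored (On):

    # Python sketch with invented numbers, just to show the shape of labeled training data.
    # Features per player: [goals per season, minutes played, shot power].
    X_train = [
        [22, 2700, 88],   # player A
        [ 5, 1200, 65],   # player B
        [15, 2400, 80],   # player C
        [ 2,  600, 55],   # player D
    ]
    # Labels (On): 1 = scored against this rival in the past, 0 = did not score.
    y_train = [1, 0, 1, 0]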

Basic training of a model with labeled data

First of all, we would provide our algorithm with all this data to “train it”, so that it learns from the patterns, relationships and circumstances of the past. In this way we obtain a trained model.

Once we have this trained model, we can ask it to make a prediction by giving it the characteristics of a new soccer player (I), one we do not yet know whether he will score in the next game or not. The model will be able to give us a prediction (P) based on the knowledge it extracted from the training data.

What defines a “good” or “bad” algorithm is the precision with which it makes predictions in a given domain and context, based on the available training data. All these calculations have a very important mathematical and statistical base which, with the levels of data processing that can currently be achieved, makes Machine Learning a very important tool for any industry or service company.
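To tie the training and prediction steps together, here is a minimal Python sketch using scikit-learn; the choice of a decision tree and all of the numbers are just assumptions for illustration, not the only way to do it:

    from sklearn.tree import DecisionTreeClassifier

    # Labeled history (same invented numbers as above):
    # features are [goals per season, minutes played, shot power].
    X_train = [[22, 2700, 88], [5, 1200, 65], [15, 2400, 80], [2, 600, 55]]
    y_train = [1, 0, 1, 0]   # 1 = scored in the past, 0 = did not

    # "Train it": the model learns patterns from the labeled data.
    model = DecisionTreeClassifier()
    model.fit(X_train, y_train)

    # New player (I): we know his features, but not whether he will score.
    new_player = [[12, 2100, 75]]

    # Prediction (P), based on what the model extracted from the training data.
    print(model.predict(new_player))

    # A rough idea of precision: the share of correct predictions.
    # (Here it is measured on the training data itself, only as an illustration.)
    print(model.score(X_train, y_train))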

“A people that does not know its history is doomed to repeat it.” Confucius.

Well Grandpa! Now that you know the basis for creating Machine Learning algorithms, I am going to tell you some historical facts about the beginnings of this concept and its relationship to what is known as Artificial Intelligence (AI).

Artificial Intelligence, Machine Learning and Deep Learning (AI, ML and DL)

Although there is currently a “boom” around Machine Learning, it is important to know its history and to realize that this field has not always been so prolific, alternating periods of high expectations and advances with “winters” in which it suffered severe stagnation.

Birth [1952–1956]

  • 1950 - Alan Turing creates the “Turing Test” to determine whether a machine is really intelligent. To pass the test, a machine has to deceive a human into believing it is a human rather than a computer.
  • 1952 - Arthur Samuel writes the first computer program capable of learning. The software was a program that played checkers and improved its play game after game.
  • 1956 - Marvin Minsky and John McCarthy, with the help of Claude Shannon and Nathan Rochester, organize the 1956 Dartmouth conference, considered the event where the field of Artificial Intelligence was born. During the conference, Minsky convinces the attendees to coin the term “Artificial Intelligence” as the name of the new field.
  • 1958 - Frank Rosenblatt designs the Perceptron, the first artificial neural network.

First Winter of AI (AI-Winter) — [1974–1980]

In the second half of the 1970s the field suffered its first “winter”. The agencies that financed AI research cut their funding after many years of high expectations and very little progress.

  • 1967 - The “Nearest Neighbor” algorithm is written. This milestone is considered the birth of the field of pattern recognition in computers.
  • 1979 - Students at Stanford University invent the “Stanford Cart”, a mobile robot capable of moving autonomously around a room while avoiding obstacles.

The explosion of the 80s [1980–1987]

The 80s were marked by the birth of expert systems, based on rules. These were quickly adopted in the corporate sector, which generated a new interest in Machine Learning.

  • 1981 - Gerald Dejong introduces the “Explanation Based Learning” (EBL) concept, where a computer analyzes training data and creates general rules that allow it to discard less important data.
  • 1985 - Terry Sejnowski invents NetTalk, which learns to pronounce words the same way a child would.

Second AI Winter [1987–1993]

At the end of the 80s, and during the first half of the 90s, the second “Winter” of Artificial Intelligence arrived. This time its effects extended for many years and the reputation of the field did not fully recover until the 2000s.

  • 1990s - Work in Machine Learning turns from a knowledge-driven approach to a data-driven one. Scientists begin to create programs that analyze large amounts of data and draw conclusions from the results.
  • 1997 - IBM’s Deep Blue computer defeats world chess champion Garry Kasparov.

Explosion and commercial adoption [2006-Present]

The increase in computing power, together with the great abundance of available data, has re-launched the field of Machine Learning. Numerous companies are transforming their businesses around data and are incorporating Machine Learning techniques into their processes, products and services to gain a competitive advantage.

  • 2006 - Geoffrey Hinton coins the term “Deep Learning” to describe new Deep Neural Network architectures that are capable of learning much better than shallower models.
  • 2011 - The IBM Watson computer defeats its human competitors on Jeopardy, a contest that consists of answering questions asked in natural language.
  • 2012 - Jeff Dean, from Google, with the help of Andrew Ng (Stanford University), leads the GoogleBrain project, which develops a Deep Neural Network using the full capacity of Google’s infrastructure to detect patterns in videos and images.
  • 2012 - Geoffrey Hinton leads the team that wins the ImageNet computer vision contest using a Deep Neural Network (DNN). The team won by a wide margin, giving birth to the current explosion of Machine Learning based on DNNs.
  • 2012 - The Google X research laboratory uses GoogleBrain to autonomously analyze YouTube videos and detect those that contain cats.
  • 2014 - Facebook develops DeepFace, an algorithm based on DNNs that is able to recognize people with the same precision as a human being.
  • 2014 - Google buys DeepMind, an English Deep Learning startup that had recently demonstrated the capabilities of Deep Neural Networks with an algorithm capable of playing Atari games simply by viewing the pixels on the screen, just as a person would. The algorithm, after a few hours of training, was able to beat human experts in some of those games.
  • 2015 - Amazon launches its own Machine Learning platform.
  • 2015 - Microsoft creates the “Distributed Machine Learning Toolkit”, which allows the efficient distribution of machine learning problems on multiple computers.
  • 2015 - Elon Musk and Sam Altman, among others, found the non-profit organization OpenAI, endowing it with one billion dollars in order to ensure that the development of Artificial Intelligence has a positive impact on humanity.
  • 2016 - Google DeepMind beats professional player Lee Sedol at Go (considered one of the most complicated board games) by 4 games to 1. Expert Go players claim that the algorithm was able to make “creative” moves they had never seen before.

Present…

Machine Learning is already all around us: written into the software in our cars, our phones, our homes and the business software we use at work, helping us access information and make better, more informed decisions more quickly.

Some of the most recognized companies that use Machine Learning

Machine Learning is being used by large companies, researchers, students, and many others to optimize and improve everyday and complex activities alike. Machine Learning is being integrated with other concepts such as Neural Networks, Artificial Intelligence and Deep Learning, among others. We do not know what will happen in the future, Grandpa, but I believe Machine Learning has already become part of our lives, just as the Internet once did.

I hope the scope and concept of Machine Learning are a little clearer for you now, Grandpa!!! And as for what is still to come, we hope it brings positive things that help solve the needs of humanity.
