Machine Learning — A Concept Introduction

Scout Curry

Since the development of efficient machine learning algorithms, engineers can build processes that were not possible before, without a lifetime of trial and error. We can run simulated programs with constraints and rewards for accomplishing a task; the more times the simulation is run, the closer the program gets to the goal, until it is ready to go out into the real world.

The core of machine learning is making a machine learn from experience. One common scenario is recognizing new pictures it is shown. Because it has practiced on training pictures beforehand, a machine learning program can make a determination about a new picture and categorize it so something can be done with it. This is how smartphones track faces to add another level of security, and even how smart cars can drive through the crowded streets of San Francisco. As powerful computers become more accessible, the field is becoming a necessity for almost any company.

History

[Image: an example of a Turing machine]

Machine learning is a field of computational science whose roots far predate the existence of computers. One of the earliest contributions came from Thomas Bayes, whose 1763 essay gave a mathematical way to calculate the probability of an event. It’s a critical formula for understanding how machines can eventually “learn” from evidence, and it is still used today. It wasn’t until 1950 that Alan Turing proposed the idea of a “learning machine”, which became the basis for scientists to start exploring the field.
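For a sense of how that formula behaves, here is a minimal Bayes’ rule sketch in Python; the test numbers are invented purely for illustration:

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
# Toy numbers (purely illustrative): a condition affects 1% of people,
# and a test detects it 99% of the time with a 5% false-positive rate.
p_condition = 0.01
p_positive_given_condition = 0.99
p_positive_given_healthy = 0.05

# Total probability of seeing a positive test result at all.
p_positive = (p_positive_given_condition * p_condition
              + p_positive_given_healthy * (1 - p_condition))

# Updated (posterior) belief that someone has the condition after a positive test.
p_condition_given_positive = p_positive_given_condition * p_condition / p_positive
print(round(p_condition_given_positive, 3))  # ~0.167
```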

There are multiple categories of learning, each useful for different types of jobs and each with its own algorithms, but I’ll just touch on the concepts behind the most important types.

Supervised Learning

This structure was briefly touched upon earlier: you have a set of pictures and you want the computer to identify which portion of each picture is a face. It’s easy for humans to determine this, but the computer has to use an algorithm to figure it out. Sometimes a decision tree is used, essentially a set of yes/no questions asked in sequence to determine what the computer is looking at, though that approach is used mainly for analyzing large amounts of information in databases.
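As a rough illustration of the decision-tree idea (a sketch with made-up features and data, not a real face dataset), here is how it might look in scikit-learn:

```python
# A minimal decision-tree sketch using scikit-learn (toy data, invented features).
from sklearn.tree import DecisionTreeClassifier

# Each row is a made-up two-number summary of an image patch.
X = [[0.9, 0.2], [0.8, 0.3], [0.2, 0.8], [0.1, 0.9]]
y = [0, 0, 1, 1]  # 0 = "not a face", 1 = "face" (labels supplied by humans)

# The tree learns a small set of yes/no questions over the features.
tree = DecisionTreeClassifier(max_depth=2)
tree.fit(X, y)

print(tree.predict([[0.15, 0.85]]))  # expected: [1]
```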

For this example, we can use an SVM (Support-Vector Machine), which learns a boundary that separates the face examples from the non-face examples and then decides which side of that boundary a new picture falls on.
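To make that concrete, here is a hedged sketch using scikit-learn’s SVC class; the two-number “features” and the handful of examples are invented, since a real face detector would use far more data and richer image features:

```python
# A minimal support-vector-machine sketch with scikit-learn.
from sklearn.svm import SVC

# Hypothetical two-number summaries of image patches.
X_train = [[0.9, 0.1], [0.8, 0.2], [0.7, 0.3],   # non-faces
           [0.2, 0.8], [0.3, 0.9], [0.1, 0.7]]   # faces
y_train = [0, 0, 0, 1, 1, 1]                     # human-provided labels

clf = SVC(kernel="linear")   # learn a boundary separating the two groups
clf.fit(X_train, y_train)

# decision_function gives a signed distance from the boundary:
# clearly positive means "face", clearly negative means "not a face".
print(clf.predict([[0.25, 0.85]]))            # expected: [1]
print(clf.decision_function([[0.25, 0.85]]))  # distance from the separating line
```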

Our first step should be to have actual humans evaluate the pictures using the same questions we will give the algorithm, so that we have a point of reference for what the result should be.

So we have our algorithm; now we need to feed it sample pictures. If the program worked, we can make a graph that shows the clear non-faces farther out and the definite faces toward the center, and compare that against the answers the humans gave. The graph should look something like this:

In this case, the x coordinate represents the human input and the y coordinate represents what the computer came up with; comparing human labels against machine output is the basic kind of graph supervised learning uses.
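One way to draw that kind of comparison (a sketch assuming matplotlib; the scores are invented, since the real ones would come from the trained model):

```python
# Sketch: plot human labels against the machine's output for a handful of pictures.
import matplotlib.pyplot as plt

human_labels = [0, 0, 1, 1, 0, 1]                    # x axis: what the people said
machine_scores = [-1.2, -0.8, 0.9, 1.3, -0.3, 0.7]   # y axis: what the computer came up with

plt.scatter(human_labels, machine_scores)
plt.xlabel("Human label (0 = not a face, 1 = face)")
plt.ylabel("Machine output (signed distance from the boundary)")
plt.title("Human judgement vs. machine output")
plt.show()
```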

Supervised learning still requires human guidance to teach the computer before it can do the task on its own. It remains very useful for things we are naturally good at, like recognizing faces.

There’s another type of machine learning that I find the most interesting, and it is only now piquing interest in its capabilities.

Genetic Learning


As we go through the example, it’s best to think of it like Darwin’s theory of evolution and the simple phrase “survival of the fittest”. All we do is give the program requirements for which results are useful and eliminate the ones that are not. In this example, survival will mean going the farthest.

Let’s have a set of 500 robots, each with legs made of multiple muscles, run through the simulation at once. In the first simulation, each robot will randomly twitch one muscle before slumping to the floor. We select the robot that slumped farther than the rest, and only that robot gets to have offspring. Its children inherit that muscle twitch, but each moves another random muscle. This process continues for as long as you keep running simulations, and eventually it produces something that looks like a reasonably normal run.
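Here is a heavily simplified sketch of that loop in Python. The “robot” is just a list of muscle-twitch strengths, and the fitness function is a stand-in for the physics simulation, which is not specified here:

```python
# A toy genetic-learning loop. Each "robot" is a genome: a list of muscle-twitch
# strengths. The fitness function below is a placeholder for a real physics simulation.
import random

GENOME_LENGTH = 10       # number of muscles
POPULATION_SIZE = 500
GENERATIONS = 50

def random_genome():
    return [random.uniform(-1, 1) for _ in range(GENOME_LENGTH)]

def fitness(genome):
    # Placeholder for "how far did the robot get"; here, just the sum of the twitches.
    return sum(genome)

def mutate(genome):
    # Offspring inherit the parent's twitches, then one muscle moves randomly again.
    child = genome[:]
    child[random.randrange(GENOME_LENGTH)] = random.uniform(-1, 1)
    return child

population = [random_genome() for _ in range(POPULATION_SIZE)]
for generation in range(GENERATIONS):
    best = max(population, key=fitness)                 # "survival of the fittest"
    population = [mutate(best) for _ in range(POPULATION_SIZE)]
    print(generation, round(fitness(best), 3))          # watch the best distance improve
```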

In this case, the data refers to what the computer learned after running each simulation. If we wanted to do the same thing without machine learning, we would only have to work with one robot, but we would have to move each muscle manually and hope that it works. That would also project our own biases about how the robot should run into the program. The result would probably look more like a human’s running motion, but sometimes you’re trying to find the most effective way to solve a problem, and that way might not be obvious and might look strange.

There are plenty of other learning types that I did not touch upon, like unsupervised learning, reinforcement learning, and much more that I want to study further before I start writing about them. Education is a process, and I’m excited to see many more real examples of this groundbreaking science that’s changing the world.
