How Machines Learn

A Basic Machine Learning Workflow in Production

Published in The Eliza Effect


Like all things artificial, AI requires a solid definition of intelligence and a design for how to get there. In 1485, Leonardo da Vinci studied the flight of birds in an attempt to create artificial flight. In 2017, Google designed hardware and software to efficiently replicate artificial neurons in hopes of advancing Artificial Intelligence, prioritizing the pursuit of artificial learning.

How do you learn to play a board game? Do you study the directions until they all make sense, or do you learn as you go? Just as humans have different ways of learning, machines learn through a variety of practices. Unlike humans, who learn the majority of what they know, computers are given far more knowledge directly than they are left to learn on their own. In fact, most of a machine's functionality today is directly (formally and algorithmically) represented for the computer, because machines haven't been as successful at learning as we are. However, faster computers (which enable smarter algorithms) are showing a lot of promise for Machine Learning.

Technology's ability to store and process data is one of the factors most strongly correlated with the recent improvements in ML. More so than other areas of Software Engineering, Machine Learning is about data processing as much as programming. This is why people with backgrounds in Statistics or Big Data are likely to work on Machine Learning alongside those who work in AI.

In practice, Machine Learning is often applied to problems that involve some form of prediction. For example, have you ever played Twenty Questions?

Akinator: A Simple Example

Rather than exercising a person's deductive reasoning through a game between people, imagine playing with a computer. The website asks you to: "Think about a real or fictional character, I will try to guess who it is." The aim of the interaction is to collect enough information about what you are thinking to make an accurate prediction.

This was my introduction to Machine Learning when I taught CS in Kazakhstan

From teaching, I know (firsthand) that Akinator is able to deduce popular characters from Kazakhstan, China, India, Peru, Chile, and the United States. How does it do this? Rather than jumping into the Linear Algebra, I play this simple game with students, asking deep questions like: how does anything have meaning? And how do we distinguish one thing from another?

So, how did Akinator know I was thinking of President Nazarbaev of Kazakhstan? A student's first thoughts turn to how they themselves come to know things: through Google and Wikipedia. I tell them this is a very hand-wavy answer. It wouldn't be impossible, of course, to scrape this information from official bios and descriptions of public figures, but how does it go from a written description to an interactive 20Q game?

Someone must've taken information about all the famous people and characters (of all time) and given it to the computer program! "Someone?" I ask. This is also a legitimate answer, as it could've been one person's job to input all the information about the Kazakh president. However, "this is unlikely to be the work of one person, as there is clearly a better way to do this," I respond. If you look at the end screen of Akinator, you'll notice that it asks for confirmation of its prediction.

The resulting yes/no can be used to train the model

We, and all those who’ve played before us, are responsible for teaching and training Akinator what it knows. This is a very simple example of how machines learn.

Modern Day Examples

Machine Learning does pretty well in the case of Akinator, but the more common ways we are starting to use ML require the ability to process and learn from less structured data and/or much larger search spaces.

  • Natural Language Processing — What is this person saying?
  • Translation — How do I ask for more water in Japanese?
  • Image Recognition — Is that mole cancerous?
  • Business — Is there some exploit in my app that I’m not aware of?

These kinds of problems have shown a lot of progress using a specific type of Machine Learning: Deep Learning. In the chart below, you can see that there are over 1,000 use cases in real products at Google that use Deep Learning.

How it could work: Types of Machine Learning

So, how does a computer know Japanese and detect cancer? It comes down to how we train the machine to make accurate judgments and predictions and how we want to frame the problem. The following outlines three common forms of Machine Learning.

Supervised Learning is one of the most popular forms, where the computer learns from concrete, labeled examples. For example, given a dataset of housing prices and house sizes, you could predict how much a house is likely to cost given its size (among other possible features). Or a set of emails tagged as spam can be used to inform a spam detector in deciding whether a future email is spam or not.
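The housing example can be sketched in a few lines of plain Python. This is a minimal illustration with made-up numbers, not a production approach: it fits a straight line (ordinary least squares) to labeled (size, price) examples, then predicts the price of an unseen house.

```python
# Minimal sketch of Supervised Learning: learn from labeled examples
# (house size -> price), then predict for an unseen house.
# All data below is invented for illustration.

def fit_line(xs, ys):
    """Ordinary least squares fit for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Labeled training examples: size (sq. meters) -> price (thousands)
sizes  = [50, 70, 90, 110, 130]
prices = [150, 200, 250, 300, 350]

a, b = fit_line(sizes, prices)
predict = lambda size: a * size + b
print(predict(100))  # -> 275.0, the estimate for a 100 sq. m house
```

The pattern generalizes: with more features per house (location, age, rooms), the "line" becomes a higher-dimensional model, but the idea of fitting to labeled examples stays the same.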

If you don't have examples to learn from, you could use Unsupervised Learning to discover structure within the data directly. Of course, this is a bit more challenging, because the data is not labeled. Unsupervised Learning often uses clustering algorithms to group unlabeled data together. Suppose you have a dataset of news stories; you could automatically cluster them so that stories about the same events are grouped together.
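The clustering idea can be sketched with k-means, one of the simplest clustering algorithms. The 1-D points and the choice of k below are invented for illustration; real news stories would first need to be turned into numeric feature vectors.

```python
# Minimal sketch of Unsupervised Learning: k-means on unlabeled 1-D
# points. No labels are given; the algorithm discovers two groups on
# its own. Data and k are invented for illustration.

def kmeans(points, k, iterations=10):
    centers = points[:k]  # naive initialization: first k points
    for _ in range(iterations):
        # Assignment step: each point joins its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Update step: move each center to its cluster's mean
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

points = [1.0, 1.2, 0.8, 10.0, 10.5, 9.5]
centers, clusters = kmeans(points, k=2)
print(sorted(centers))  # two centers, near 1.0 and 10.0
```

Note that the algorithm never sees a "correct" grouping; the two clusters emerge from the structure of the data itself.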

Another common approach to ML is Reinforcement Learning, or learning based on outcomes. One of the first self-learning programs (in 1952) played checkers against itself thousands of times and eventually learned the best board positions for winning. It's a bit like Supervised Learning, but instead of being shown examples of winning strategies, the program has to discover them on its own.
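Outcome-based learning can be sketched with tabular Q-learning, a standard Reinforcement Learning algorithm (not the 1952 checkers program itself). The 5-cell corridor environment and the hyperparameters below are invented for illustration.

```python
# Minimal sketch of Reinforcement Learning: tabular Q-learning on a
# tiny 5-cell corridor. A reward is given only for reaching the
# rightmost cell, so the agent learns good moves purely from the
# outcomes of repeated play.
import random

N_STATES = 5        # cells 0..4; the goal is cell 4
ACTIONS = [-1, +1]  # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.5
random.seed(0)

for episode in range(300):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: explore half the time, otherwise exploit
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)  # walls clamp moves
        reward = 1.0 if s2 == N_STATES - 1 else 0.0
        best_next = max(q[(s2, act)] for act in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s2

# The learned greedy policy should be "always step right"
policy = [max(ACTIONS, key=lambda act: q[(s, act)])
          for s in range(N_STATES - 1)]
print(policy)
```

No winning strategy is ever shown to the agent; the "always step right" behavior emerges solely from which sequences of moves led to reward.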

Why ML: Types of Problems We Solve with ML

Another way to think about ML is through why you are using a particular approach and what you are trying to accomplish. Here are five ways ML can be applied:

  1. Classification — for predicting non-continuous or discrete categories. Is it a cat? Is it cancerous? Is it spam?
  2. Regression — for predicting floating point or real number values, like the probability that a user clicks on an ad or how much a house is likely to sell for.
  3. Similarity/Anomaly — for retrieving something similar (like searching for similar images) or, given user data, finding behavioral anomalies. Similarity detection applies to recommendation systems, like what movies are in the same genre. Anomaly detection could look for cheats, exploits, and fraud in user behavior.
  4. Ranking — for ordering a set of results by their relevance to a particular input. Basically, anything that involves search results will have some sort of model for ranking.
  5. Sequence Prediction — for predicting the next element in a series of data. What is the next word in this sentence? What’s the next video to show?

If some of these sound like similar problems, it’s because they are. For example, take Classification and Similarity detection; if two images have the same category label, then they are likely similar. Another example would be for Ranking and Regression. You could predict a score for various search results (which is Regression), and then return them in a sorted order (which is Ranking). These are simply theoretical frameworks for how you’d like to think about particular problems or challenges.
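The Regression-then-Ranking connection can be sketched directly. The scoring function and documents below are invented; a real ranker would learn its scores from data rather than use this hand-written word-overlap rule.

```python
# Sketch of Ranking via Regression: score each candidate result with
# a (toy, hand-written) regression function, then return results
# sorted by score. Documents and scoring rule invented for illustration.

def relevance_score(query, doc):
    """Toy 'regression': fraction of query words appearing in the doc."""
    query_words = set(query.lower().split())
    doc_words = set(doc.lower().split())
    return len(query_words & doc_words) / len(query_words)

def rank(query, docs):
    """Ranking = sorting candidates by their predicted scores."""
    return sorted(docs, key=lambda d: relevance_score(query, d),
                  reverse=True)

docs = [
    "how to bake bread at home",
    "ranking web pages by popularity",
    "machine learning for search ranking",
]
print(rank("machine learning ranking", docs)[0])
# -> "machine learning for search ranking"
```

Swapping the toy `relevance_score` for a trained regression model turns this into the standard score-then-sort structure behind many real search systems.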

What it looks like: Basic Workflow of Akinator

So, what does it look like in action? Here are the three main ingredients and the three general operations for a simple Machine Learning example. The typical symbolic representation of ML is: y = prediction_model(x[i])

  1. Label or Target — the Label or Target represents the y-value or what is being solved for. In the case of Akinator, this is the character that is being predicted.
  2. Features or Input — the Features or Input are the qualities (x[i]) or properties of a given instance, example, or potential Target. For Akinator, these Features are informed by what the user selects for the yes/no questions.
  3. Prediction Model — the Prediction Model, personified as “Akinator, the Web Genius,” is where all the primary ML functionality happens.

The figure below shows that the Prediction Model returns a result (y: Label or Target) given a number of Inputs or Features (x[i]). Here are the main operations that occur:

  1. Training happens when sets of Features (x) paired with Labels (y) are used to build the Model.
  2. Inference happens when a set of Features become inputs for requesting a prediction. “Given x, what is y?”
  3. Prediction happens when the Model is given Features towards discerning and returning a best guess. “This is y!”
ML Pipeline

Supervised Learning and Reinforcement Learning would be two ways to build a Prediction Model. For Akinator, the intent is to solve a Classification problem for accurately labeling characters and public figures.
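Putting the ingredients and operations together, the workflow can be sketched as a toy character guesser. The characters, questions, and nearest-match rule here are all invented for illustration; Akinator's actual model is not public.

```python
# Toy sketch of the Akinator-style workflow: Training stores
# (Features -> Label) examples, Inference takes a new set of yes/no
# answers, and Prediction returns the best-matching character.

QUESTIONS = ["Is your character real?",
             "Is your character a politician?",
             "Is your character from Kazakhstan?"]

class GuessingModel:
    def __init__(self):
        self.examples = []  # list of (features, label) pairs

    def train(self, features, label):
        """Training: pair Features (x) with a confirmed Label (y)."""
        self.examples.append((tuple(features), label))

    def predict(self, features):
        """Inference + Prediction: 'Given x, what is y?'"""
        def matches(example):
            xs, _ = example
            return sum(a == b for a, b in zip(xs, features))
        _, best_label = max(self.examples, key=matches)
        return best_label

model = GuessingModel()
# Each confirmed game becomes a training example (yes=1, no=0)
model.train([1, 1, 1], "Nursultan Nazarbaev")
model.train([0, 0, 0], "Sherlock Holmes")
model.train([1, 0, 0], "Leonardo da Vinci")

print(model.predict([1, 1, 1]))  # -> Nursultan Nazarbaev
```

The confirmation screen described earlier maps onto `train`: every "yes, that was my character" adds another labeled example, which is how players collectively teach the model.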

Try it for yourself?

  1. Coding: If you are interested in the latest in Machine Learning, here’s a really great introduction on Github:
  2. Reading: And if you really want to nerd out, here’s a more advanced reading on learning algorithms for AI programs like the genius behind Akinator.
  3. Interacting: Here’s a beautiful visualization of Decision-Tree learning, if you’re just curious to see ML in action.

I’ll leave you with the closing of Alan Turing’s 1950 paper:

We may hope that machines will eventually compete with men in all purely intellectual fields. But which are the best ones to start with? Even this is a difficult decision. Many people think that a very abstract activity, like the playing of chess, would be best. It can also be maintained that it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English. This process could follow the normal teaching of a child. Things would be pointed out and named, etc. Again I do not know what the right answer is, but I think both approaches should be tried.

We can only see a short distance ahead, but we can see plenty there that needs to be done.




ELIZA was a chatbot developed in 1966. The ELIZA Effect is the tendency to unconsciously assume computer behaviors are analogous to human behaviors. Here you’ll find articles on Artificial Intelligence, Machine Learning, Believability, and Procedural Thinking.

Sherol Chen

AI, Games, and Education
