An Honest Guide to Machine Learning: Part One

Axiom Zen Team · Published in Axiom Zen
8 min read · Sep 12, 2016

Understanding Machine Learning

The Honest Guide to Machine Learning provides a deep dive into machine learning technology — no math necessary.

In Part 1 of our guide, we give you the foundational tools necessary to make sense of machine learning.

Part 2 of our Honest Guide to Machine Learning
Part 3 of our Honest Guide to Machine Learning

One investor joked that adding AI to the end of your startup’s name would increase investment by 10% — and while he was exaggerating, the truth is that artificial intelligence is on everyone’s minds.

What exactly do we mean when we talk about artificial intelligence? It doesn’t necessarily mean a computer that thinks exactly like a human, although that is the end goal of most AI research. Artificial intelligence is a smart agent that takes input from sensors and, based on incoming signals, internal knowledge, and some decision-making system, decides on its next action (a bare-bones version of that loop is sketched right after the list below). Fulfilling this simple goal has resulted in a vast area of research and development, answering questions like:

  • How should decision making work?
  • Is randomness important for decision making?
  • Where does knowledge come from?
  • How should these smart agents talk to each other?
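To make that loop concrete without any math, here’s a bare-bones sketch in Python. Everything in it (the thermostat-style sensor, the “comfortable temperature” knowledge, the two possible actions) is invented purely for illustration:

import random

knowledge = {"comfortable_temp": 21.0}  # the agent's internal knowledge

def sense():
    # Stand-in for a real sensor reading (a room temperature, in this sketch).
    return random.uniform(15.0, 30.0)

def decide(reading, knowledge):
    # Decision making: compare the incoming signal with what the agent knows.
    return "heat" if reading < knowledge["comfortable_temp"] else "idle"

for _ in range(3):
    reading = sense()
    print(f"sensed {reading:.1f} C, action: {decide(reading, knowledge)}")

Real agents are vastly more sophisticated, but the shape is the same: sense, consult knowledge, decide, act.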

Currently, researchers are using machine learning to tackle the third problem: knowledge. This can be done by mapping input signals to human-added knowledge, or by automatically collecting knowledge from different sources of input.

When we talk about artificial intelligence that can solve only one problem, we call it narrow AI. Narrow AI is in our phones; it’s in our messaging services; it’s trading stocks, designing websites, and making even basic programs run more smoothly. Machine learning is the path we’re taking to try to achieve AI, and it’s becoming harder and harder to design software without using some form of machine learning. This is leaving a generation of engineers and developers without the tools they need to excel in their field.

While there are plenty of guides to machine learning out there (introductions, how-tos, 101s), the problem is that even “simple guides” bombard readers with advanced mathematical formulas. So what do you do if you want to get into machine learning, but you don’t have a math degree?

This is where we come in. We want to create a guide to machine learning without the math, and without the magic. Over the next few weeks, we’ll be detailing a journey from layperson to expert with An Honest Guide to Machine Learning. Learn the ins and outs, the whys and wherefores, and do it all without once breaking out your calculator.

Today we’re starting things off nice and simple: with a basic history of what machine learning is, and why we’re all so gung-ho about it.

The Typical Machine Learning Paper

Welcome to most machine learning papers. The cute language and casual tone do nothing to disguise the info dump of formulas. Take one of those terrifying figures: a classic example explains the basic idea of a machine learning technique called Support Vector Machines, and it can be summed up in two thoughts: “Do not completely trust your data, because of the noise, and do your best to generalize your model to cover future data.”
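If you’d rather see that idea in code than in formulas, here’s a minimal sketch using scikit-learn (our choice of library, not something drawn from any particular paper). The C parameter is the knob that trades off trusting noisy training data against generalizing to future data:

# A minimal sketch of the soft-margin idea behind Support Vector Machines.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Toy, slightly noisy two-class data.
X, y = make_classification(n_samples=200, n_features=2, n_informative=2,
                           n_redundant=0, flip_y=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small C tells the model "do not completely trust the data": it tolerates
# some misclassified training points in exchange for a simpler boundary that
# should generalize better to future data.
forgiving = SVC(kernel="linear", C=0.1).fit(X_train, y_train)

# A huge C tells the model to trust every training point, noise and all.
trusting = SVC(kernel="linear", C=1000).fit(X_train, y_train)

print("forgiving model, test accuracy:", forgiving.score(X_test, y_test))
print("trusting model, test accuracy:", trusting.score(X_test, y_test))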

An Old Dream

The dream of creating artificial intelligence is nothing new. Though we have long feared the possibility of dangerous AI run wild (“I’m sorry, Dave, I’m afraid I can’t do that.”), we also dream of benevolent computers, something that will love us, and that we will love in return (Her). And so the community of artificial intelligence research was born.

Ask Your Doctor

Our first attempt to mimic human intelligence in a computerized form involved copying the human brain. We wanted to see if we could make an artificial version, which would fire the same way neurons fire in the brain.
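In code, a single artificial “neuron” of that kind is surprisingly small. The sketch below shows the textbook idea (weighted inputs plus a threshold for firing), with weights picked by hand purely for illustration:

# A sketch of one artificial neuron: weighted inputs, a bias, and a
# fire/don't-fire threshold. This is the textbook idea, not a model of
# any real biological neuron.
def neuron(inputs, weights, bias):
    # Weighted sum of the incoming signals.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Fire (output 1) only if the total signal crosses the threshold.
    return 1 if total > 0 else 0

# Example: two input signals, hand-picked weights.
print(neuron([0.5, 0.9], weights=[1.0, -0.4], bias=-0.1))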

We ran into two problems with this branch of research. The first was that the human brain was simply too complicated: with roughly 100 billion neurons, each linking to tens of thousands of others, the computational power required to mimic that flow was simply beyond us. It couldn’t be done.

The second problem was the limitations in our understanding of how the brain works. We still have so much to learn about how our minds operate — how can we copy something we so imperfectly understand?

Ask Your Math Teacher

The next stage in AI research involved, for the most part, abandoning the attempt to recreate the brain. Instead it treated the brain as if it were a computer, and attempted to model its behaviour. Sight, sound, smell, and touch became input; once those signals passed through the brain, they became output: decisions, communication, and categorizations.

This model was very popular, but far too complicated. Trying to find one algorithm that could understand sight (categorization) and sound (natural language processing), for instance, seemed like a pipe dream.

(Not everyone made the jump from simulating the brain to mathematical modelling; some researchers stuck with the brain. Keep that in mind for later.)

Divided By Input

So the different branches of AI were born, and began to change the way we looked at AI. There are still “purists” who feel that narrow AI is not AI at all, and many a cerebral argument has been had on the differences between narrow and general AI, and what the definition of intelligence really is.

This was a great step in the right direction, but input can only be divided into a handful of basic sources: text, audio, and visual (we have yet to conquer the input of scent or touch). A new way to narrow the focus of problems was needed.

Divided by Output

So instead of dividing by input, we began dividing by output. This made it much easier to create focused, narrow solutions. Instead of focusing on the input of vision as a whole, for instance, we could focus on facial recognition, or on finding similar images.

This was much more effective, and gave us excellent narrow AI. Technology like this is still being used by Google’s search engine, for instance, and by Facebook’s tagging algorithms.

The problem is that it doesn’t scale up: applying it to anything outside of the incredibly narrow focus you’ve defined doesn’t work. If I teach an algorithm to tell the difference between a cat and a dog, and then I show it a picture of a tree? It will classify it as either a cat or dog, because it doesn’t know there are any other options. This was clearly not the way of the future.
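Here’s a toy sketch of that failure mode. The “features” and scoring are made up for illustration; the point is simply that a two-class model has no way to say “neither”:

# A sketch of why a narrow classifier fails outside its narrow focus.
# The features and scores below are hypothetical; the point is only that
# a model trained on two classes must always pick one of them.

def classify(furriness, bark_volume):
    # A pretend model trained to separate cats from dogs.
    cat_score = 2.0 * furriness - 1.5 * bark_volume
    dog_score = 1.0 * furriness + 2.0 * bark_volume
    # Whatever we show it, it must choose one of the two classes it knows.
    return "cat" if cat_score > dog_score else "dog"

print(classify(furriness=0.9, bark_volume=0.1))  # a cat-ish input: "cat"
print(classify(furriness=0.0, bark_volume=0.0))  # a tree: still "cat" or "dog"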

Classic AI: Rule Based Solutions

Some researchers tried to solve this using rule-based solutions, sometimes called Classic or Symbolic AI. To create classic AI, researchers write a list of logical rules and a reasoning system to draw conclusions. The problem is that writing a rule is not trivial: it’s incredibly time-consuming, and sometimes it isn’t clear what is true and what isn’t, which can lead to arguments about what counts as a fact. While it “works,” it isn’t sustainable as a commercial solution to all of our problems.
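To make “a list of logical rules and a reasoning system” concrete, here’s a minimal sketch of the symbolic approach. The facts and the single rule are invented for illustration:

# A minimal sketch of classic, rule-based AI: hand-written facts, one
# hand-written rule, and a simple reasoner that keeps applying the rule
# until nothing new can be concluded.
facts = {("cat", "is_a", "mammal"), ("mammal", "is_a", "animal")}

def infer(facts):
    # Rule: "is_a" is transitive. If A is_a B and B is_a C, then A is_a C.
    while True:
        new = {(a, "is_a", c)
               for (a, r1, b) in facts if r1 == "is_a"
               for (b2, r2, c) in facts if r2 == "is_a" and b2 == b}
        new -= facts
        if not new:
            return facts
        facts |= new

print(infer(set(facts)))  # now also contains ("cat", "is_a", "animal")

Every fact and every rule in a system like this has to be written, and argued over, by hand, which is exactly why it doesn’t scale.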

Ask Your Statistics Teacher

Other researchers used statistics and pattern recognition to let computers find the rules they need. Called Statistical Machine Learning, this is where giant chalkboards full of mathematical formulas come from. Looks pretty impressive, no? The problem is that once you’ve found the formula, it still needs to be optimized — this is a step, but not an end point. So, where do you head?

Back to your math teacher to optimize the formula.

This is where we’ve landed today: we use an objective function to extract patterns from data, and then we optimize that objective function. The approach has both strengths and weaknesses. It’s one of the best solutions we’ve found, but it’s still imperfect. Optimization isn’t always possible, and getting enough input data can be a real challenge! All those articles about sample bias corrupting machine learning? That’s because there often isn’t enough input data to afford throwing any of it away, even the parts that are biased.
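To put “extract patterns with an objective function, then optimize it” into a few lines: below is a sketch that fits a single parameter to toy data by gradient descent on a squared-error objective. The data, the model (y = w * x), and the learning rate are all ours, invented for illustration:

# A sketch of the statistical-ML loop: define an objective function that
# measures how badly a model fits the data, then optimize it.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, y) pairs, roughly y = 2x

def objective(w):
    # Mean squared error: the "formula on the chalkboard".
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def gradient(w):
    # Derivative of the objective with respect to w.
    return sum(2 * (w * x - y) * x for x, y in data) / len(data)

w = 0.0
for _ in range(200):       # the optimization step: plain gradient descent
    w -= 0.05 * gradient(w)

print(w, objective(w))     # w ends up near 2, the pattern in the data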

Unfortunately, our computers still don’t have the computational power necessary to work as quickly and efficiently as we want them to. And while we’re now making amazing strides at solving the small tasks, that tree being categorized as a dog or a cat is still a problem. Our incredible work on narrow AI has done little (or nothing) to help us solve the problem of creating a general (true) artificial intelligence.

The End of the Beginning

Hopefully you now understand more about artificial intelligence, its connection to machine learning, and the different approaches to machine learning. Throughout the rest of the series we’ll talk about machine learning tasks at a high level, and about techniques for each type of input.

Part 2 of our Honest Guide to Machine Learning
Part 3 of our Honest Guide to Machine Learning

Written by Ramtin Seraj and Wren Handman

Want to do more than read about AI and Machine Learning?
Axiom Zen is always hiring!


Axiom Zen is a venture studio. We build startups both independently and in partnership with industry leaders. Follow our publication at medium.com/axiom-zen