Image: a human figure alongside computer code on a gradient background | machine learning vs. AI
Machine Learning Open License — Image Credits: IoT World Today

Artificial Intelligence, Machine Learning

Machine Learning (ML) vs. AI and their Important Differences

Unfortunately, some tech organizations deceive customers by proclaiming that their technologies use artificial intelligence (AI) while not being clear about their products’ limits.

By Roberto Iriondo | October 15, 2018 | Last updated: May 7, 2020

Recently, a report was released regarding the misuse by companies claiming to use artificial intelligence [29] [30] in their products and services. According to The Verge [29], 40% of European startups that claim to use AI do not actually use the technology. Last year, TechTalks also stumbled upon such misuse by companies claiming to use machine learning and advanced artificial intelligence to gather and examine thousands of users’ data to enhance the user experience in their products and services [2] [33].

Unfortunately, there is still much confusion within the public and the media regarding what artificial intelligence [44] truly is, and what machine learning [18] truly is. Often the terms are used as synonyms; in other cases, they are treated as discrete, parallel advancements; while others take advantage of the trend to create hype and excitement, as to increase sales and revenue [2] [31] [32] [45].

Below we go through some main differences between AI and machine learning.

What is machine learning?

What is Machine Learning | Tom M. Mitchell, Machine Learning, McGraw Hill, 1997 [18]

Quoting Interim Dean at the School of Computer Science at CMU, Professor and Former Chair of the Machine Learning Department at Carnegie Mellon University, Tom M. Mitchell:

A scientific field is best defined by the central question it studies. The field of Machine Learning seeks to answer the question:

“How can we build computer systems that automatically improve with experience, and what are the fundamental laws that govern all learning processes?” [1]

Machine learning (ML) is a branch of artificial intelligence. As defined by computer scientist and machine learning pioneer [19] Tom M. Mitchell: “Machine learning is the study of computer algorithms that allow computer programs to automatically improve through experience” [18]. ML is one of the ways we expect to achieve AI. Machine learning relies on working with datasets, small to large, examining and comparing the data to find common patterns and explore nuances.

For instance, if you provide a machine learning model with many songs that you enjoy, along with their corresponding audio statistics (danceability, instrumentalness, tempo, genre), it should be able (depending on the supervised machine learning model used) to build a recommender system [43] that suggests music you are highly likely to enjoy in the future, similar to what Netflix, Spotify, and other companies do [20] [21] [22].
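To make this concrete, here is a minimal, self-contained sketch of such a recommender. The songs, their feature values, and the average-cosine-similarity scoring are all invented for illustration; production systems like Spotify’s use far richer features and models.

```python
from math import sqrt

# Hypothetical audio features per song: (danceability, instrumentalness, tempo / 200)
liked_songs = {
    "song_a": (0.9, 0.1, 0.6),
    "song_b": (0.8, 0.2, 0.7),
}
candidates = {
    "song_c": (0.85, 0.15, 0.65),  # close to the liked songs in feature space
    "song_d": (0.10, 0.90, 0.30),  # very different
}

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norms = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norms

def recommend(liked, pool):
    """Rank candidates by mean similarity to the user's liked songs."""
    score = lambda feat: sum(cosine(feat, f) for f in liked.values()) / len(liked)
    return max(pool, key=lambda name: score(pool[name]))

print(recommend(liked_songs, candidates))  # "song_c": nearest to the user's taste
```

The point is only that “learning your taste” can reduce to comparing feature vectors; real recommenders also combine collaborative filtering and learned embeddings.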

In a simple example, if you load a machine learning program with a considerably large dataset of x-ray pictures along with their descriptions (symptoms, items to consider, and others), it should be able to assist (or perhaps automate) the data analysis of x-ray pictures later on. The machine learning model looks at each picture in the diverse dataset and finds common patterns among pictures whose labels carry comparable indications. Furthermore, (assuming that we use an acceptable ML algorithm for images) when you load the model with new pictures, it compares their parameters with the examples it gathered before to determine how likely the pictures are to contain any of the indications it analyzed previously.

Supervised Learning (Classification/Regression) | Unsupervised Learning (Clustering) | Credits: Western Digital [14]

The type of machine learning in our previous example is called “supervised learning.” Supervised learning algorithms try to model relationships and dependencies between the target prediction output and the input features, such that we can predict the output values for new data based on those relationships, which the model has learned from the previously fed datasets [16].
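As an illustrative sketch (not any particular production algorithm), a one-nearest-neighbor classifier is about the smallest supervised learner one can write. The labeled examples below are made up; prediction simply copies the label of the closest known example.

```python
# Made-up labeled training data: (feature_1, feature_2) -> class label
training_data = [
    ((1.0, 1.0), "benign"),
    ((1.2, 0.9), "benign"),
    ((5.0, 5.2), "malignant"),
    ((4.8, 5.1), "malignant"),
]

def predict(x):
    """Supervised prediction: return the label of the nearest labeled example."""
    def sq_dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    _, label = min(training_data, key=lambda example: sq_dist(example[0], x))
    return label

print(predict((1.1, 1.1)))  # "benign": closest to the first cluster of examples
print(predict((5.0, 5.0)))  # "malignant"
```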

Unsupervised learning, another type of machine learning, is the family of machine learning algorithms whose main uses are in pattern detection and descriptive modeling. These algorithms do not have output categories or labels on the data (the model trains with unlabeled data).
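For contrast, here is a sketch of unsupervised learning: a bare-bones k-means clustering of unlabeled points. The data and the initialization are simplified for illustration (centroids are seeded deterministically from the first k points rather than at random).

```python
def kmeans(points, k, iters=20):
    """Plain k-means: repeatedly assign each point to its nearest centroid,
    then move each centroid to the mean of the points assigned to it."""
    centroids = list(points[:k])  # simplistic deterministic initialization
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[nearest].append(p)
        centroids = [tuple(sum(cs) / len(c) for cs in zip(*c)) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

# Two obvious groups of UNLABELED points; no categories are given in advance,
# yet the algorithm recovers centers near (0.1, 0.1) and (5.03, 5.0) on its own.
data = [(0.1, 0.2), (5.0, 5.1), (0.2, 0.1), (5.2, 4.9), (0.0, 0.0), (4.9, 5.0)]
print(sorted(kmeans(data, k=2)))
```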

Reinforcement Learning | Credits: Types of ML Algorithms you Should Know by David Fumo [3]

Reinforcement learning, the third popular type of machine learning, aims to use observations gathered from interaction with the environment to take actions that maximize reward or minimize risk. In this case, the reinforcement learning algorithm (called the agent) continuously learns from its environment through iteration. A great example of reinforcement learning is computers reaching a superhuman state and beating humans at computer games [3].
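A tiny tabular Q-learning agent shows the loop described above; the corridor environment, reward scheme, and hyperparameters are all invented for this sketch. The agent acts, observes a reward, and iteratively updates its value estimates until the greedy policy maximizes reward.

```python
import random

# Environment: a corridor of states 0..4; only reaching state 4 pays +1.
N_STATES, GOAL = 5, 4
ACTIONS = [+1, -1]                      # move right or left
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

rng = random.Random(42)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(500):                    # episodes of interaction
    s = 0
    while s != GOAL:
        # Epsilon-greedy: usually exploit the best known action, sometimes explore.
        if rng.random() < epsilon:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Q-learning update: nudge Q(s, a) toward reward + discounted future value.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)}
print(policy)  # every state maps to +1: the agent has learned to head right
```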

Machine learning is mesmerizing, particularly its advanced sub-branches, i.e., deep learning and the various types of neural networks. In any case, it is not “magic” (its foundations lie in computational learning theory [17]), regardless of whether the public, at times, has trouble observing its inner workings. While some tend to compare deep learning and neural networks to the way the human brain works, there are essential differences between the two [2] [4] [46].

What is Artificial Intelligence (AI)?

The AI Stack, explained by Andrew Moore, Professor and Dean of the School of Computer Science, Carnegie Mellon University | YouTube [15]

Artificial intelligence, on the other hand, is vast in scope. According to Andrew Moore [6] [36] [47], former Dean of the School of Computer Science at Carnegie Mellon University, “Artificial intelligence is the science and engineering of making computers behave in ways that, until recently, we thought required human intelligence.”

That is a great way to define AI in a single sentence; however, it also shows how broad and vague the field is. Fifty years ago, a chess-playing program was considered a form of AI [34], since game theory and game strategies were capabilities only a human brain could perform. Nowadays, a chess game is dull and antiquated, since it ships with almost every computer’s operating system (OS) [35]; therefore, “until recently” is something that moves forward with time [36].

Zachary Lipton, Assistant Professor and Researcher at CMU, clarifies on Approximately Correct [8] that the term AI “is aspirational, a moving target based on those capabilities that humans possess but which machines do not.” AI also includes a considerable measure of the technological advances that we know; machine learning is only one of them. Earlier AI work used different techniques: for instance, Deep Blue, the AI that defeated the world chess champion in 1997, used a method called tree search [9] to evaluate millions of moves at every turn [2] [37] [52] [53].
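A toy version of the tree-search idea behind Deep Blue is the minimax algorithm. The tree and leaf scores below are invented for illustration; Deep Blue layered alpha-beta pruning, specialized hardware, and a hand-tuned chess evaluation function on top of the same principle.

```python
def minimax(node, maximizing):
    """Search a game tree exhaustively: leaves hold static evaluation scores;
    interior nodes take the best child score for the player to move."""
    if isinstance(node, (int, float)):          # leaf: an evaluated position
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A hand-built two-ply tree: each sub-list is the opponent's replies to one move.
tree = [
    [3, 12, 8],   # opponent answers move 1 with its minimum: 3
    [2, 4, 6],    # move 2 guarantees only 2
    [14, 5, 2],   # move 3 guarantees only 2
]
print(minimax(tree, maximizing=True))  # 3: the best outcome we can guarantee
```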

Example of solving the Eight Queens puzzle using Depth-First Search | Introduction to Artificial Intelligence | how2Examples
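The Eight Queens puzzle in the figure is a classic depth-first search exercise; a minimal backtracking solver looks like this (a board is represented as the queen’s column index in each row):

```python
def solve_n_queens(n, cols=()):
    """Depth-first search with backtracking: place one queen per row, and
    abandon any branch where two queens share a column or a diagonal."""
    row = len(cols)
    if row == n:
        return cols                      # solved: n queens placed safely
    for c in range(n):
        safe = all(c != pc and abs(c - pc) != row - pr
                   for pr, pc in enumerate(cols))
        if safe:
            solution = solve_n_queens(n, cols + (c,))
            if solution is not None:
                return solution
    return None                          # dead end: backtrack to the caller

print(solve_n_queens(8))  # a valid placement, one column index per row
```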

AI, as we know it today, is symbolized by the human-AI interaction gadgets of Google Home, Siri, and Alexa, and by the machine-learning-powered video recommendation systems behind Netflix, Amazon, and YouTube. These technological advancements are progressively becoming essential in our daily lives. They are intelligent assistants that enhance our abilities as humans and professionals, making us more and more productive.

In contrast to machine learning, AI is a moving target [51], and its definition changes as its related technological advancements become more developed [8]. Possibly, within a few decades, today’s innovative AI advancements will be considered as dull as flip phones are to us right now.

Why do tech companies tend to use AI and ML interchangeably?

“… what we want is a machine that can learn from experience.” ~ Alan Turing

The term “artificial intelligence” was coined in 1956 by a group of researchers including Allen Newell and Herbert A. Simon [10]. Since then, the AI industry has gone through many fluctuations. In the early decades, there was much hype surrounding the field, and many scientists concurred that human-level AI was just around the corner. However, undelivered assertions caused a general disenchantment with the industry and among the public, and led to the AI winter, a period when funding and interest in the field subsided considerably [2] [38] [39] [48].

Afterward, organizations attempted to distance themselves from the term AI, which had become synonymous with unsubstantiated hype, and used different terms to refer to their work. For instance, IBM described Deep Blue as a supercomputer and explicitly stated that it did not use artificial intelligence [11], while it actually did [23].

During this period, a variety of other terms, such as big data, predictive analytics, and machine learning, started gaining traction and popularity [40]. In 2012, machine learning, deep learning, and neural networks made great strides and found use in a growing number of fields. Organizations suddenly started to use the terms machine learning and deep learning to advertise their products [41].

Deep learning began to perform tasks that were impossible with classic rule-based programming. Fields such as speech and face recognition, image classification, and natural language processing, which had been at early stages, suddenly took great leaps [2] [24] [49], and in March 2019, three of the most recognized deep learning pioneers won a Turing Award thanks to their contributions and breakthroughs that have made deep neural networks a critical component of today’s computing [42].

Hence the momentum: we see a gearshift back to AI. For those used to the limits of old-fashioned software, the effects of deep learning almost seemed like “magic” [17], especially since a fraction of the fields that neural networks and deep learning are entering were considered off-limits for computers. Nowadays, machine learning and deep learning engineers earn high-level salaries, even when working at non-profit organizations, which speaks to how hot the field is [50] [12].

Source: Twitter | GPT-2: Better Language Models and Their Implications, OpenAI

Sadly, media companies often report on AI without profound examination, frequently accompanying AI articles with pictures of crystal balls and other supernatural portrayals. Such deception helps companies generate hype around their offerings [27]. Yet, down the road, as they fail to meet expectations, these organizations are forced to hire humans to make up for the shortcomings of their so-called AI [13]. In the end, they might cause mistrust in the field and trigger another AI winter for the sake of short-term gains [2] [28].

I am always open to feedback; please share in the comments if you see something that may need to be revisited. Thank you for reading!

Acknowledgments:

The author would like to extensively thank Ben Dickson, Software Engineer and tech blogger, for his kindness in allowing me to rely on his expertise and storytelling, along with several members of the AI community for their immense support and constructive criticism in the preparation of this article.

DISCLAIMER: The views expressed in this article are those of the author(s) and do not represent the views of Carnegie Mellon University, nor other companies (directly or indirectly) associated with the author(s). These writings are not intended to be final products, but rather a reflection of current thinking, along with being a catalyst for discussion and improvement.


You can find me on my website, Medium, Instagram, Twitter, Facebook, LinkedIn, or through my SEO company.



References:

[1] The Discipline of Machine learning | Tom M. Mitchell | http://www.cs.cmu.edu/~tom/pubs/MachineLearning.pdf

[2] Why the difference between AI and machine learning matters | Ben Dickson | TechTalks | https://bdtechtalks.com/2018/10/08/artificial-intelligence-vs-machine-learning/

[3] Types of Machine Learning Algorithms You Should Know | David Fumo | Towards Data Science | https://towardsdatascience.com/types-of-machine-learning-algorithms-you-should-know-953a08248861

[4] Watch our AI system play against five of the world’s top Dota 2 Professionals | Open AI | https://openai.com/five/

[5] Differences between Neural Networks and Deep Learning | Quora | https://www.quora.com/What-is-the-difference-between-Neural-Networks-and-Deep-Learning

[6] What Machine Learning Can and Cannot Do | WSJ | https://blogs.wsj.com/cio/2018/07/27/what-machine-learning-can-and-cannot-do/

[7] Carnegie Mellon Dean of Computer Science on the Future of AI | Forbes | https://www.forbes.com/sites/peterhigh/2017/10/30/carnegie-mellon-dean-of-computer-science-on-the-future-of-ai

[8] From AI to ML to AI: On Swirling Nomenclature & Slurried Thought | Zachary C. Lipton | Approximately Correct | http://approximatelycorrect.com/2018/06/05/ai-ml-ai-swirling-nomenclature-slurried-thought/

[9] Tree Search Algorithms | Introduction to AI | http://how2examples.com/artificial-intelligence/tree-search

[10] Reinventing Education Based on Data and What Works, Since 1955 | Carnegie Mellon University | https://www.cmu.edu/simon/what-is-simon/history.html

[11] Does Deep-Blue use AI? | Richard E. Korf | University of California | https://www.aaai.org/Papers/Workshops/1997/WS-97-04/WS97-04-001.pdf

[12] Artificial Intelligence: Salaries Heading Skyward | Stacy Stanford | Machine Learning Memoirs | https://medium.com/mlmemoirs/artificial-intelligence-salaries-heading-skyward-e41b2a7bba7d

[13] The rise of ‘pseudo-AI’: how tech firms quietly use humans to do bots’ work | The Guardian | https://www.theguardian.com/technology/2018/jul/06/artificial-intelligence-ai-humans-bots-tech-companies

[14] Simplify Machine Learning Pipeline Analysis with Object Storage | Western Digital | https://blog.westerndigital.com/machine-learning-pipeline-object-storage/

[15] Dr. Andrew Moore Opening Keynote | Artificial Intelligence and Global Security Initiative | https://youtu.be/r-zXI-DltT8

[16] The 50 Best Public Datasets for Machine Learning | Stacy Stanford | https://medium.com/datadriveninvestor/the-50-best-public-datasets-for-machine-learning-d80e9f030279

[17] Computational Learning Theory | ACL | http://www.learningtheory.org/

[18] Machine Learning Definition | Tom M. Mitchell| McGraw-Hill Science/Engineering/Math; (March 1, 1997), Page 1 | http://www.cs.cmu.edu/afs/cs.cmu.edu/user/mitchell/ftp/mlbook.html

[19] For pioneering contributions and leadership in the methods and applications of machine learning. | “Prof. Tom M. Mitchell”. National Academy of Engineering. Retrieved October 2, 2011.

[20] Recommender System | Wikipedia | https://en.wikipedia.org/wiki/Recommender_system

[21] Spotify’s “This Is” playlists: the ultimate song analysis for 50 mainstream artists | James Le | https://towardsdatascience.com/spotifys-this-is-playlists-the-ultimate-song-analysis-for-50-mainstream-artists-c569e41f8118

[22] How recommender systems make their suggestions | Bibblio | https://medium.com/the-graph/how-recommender-systems-make-their-suggestions-da6658029b76

[23] Deep Blue | Science Direct Assets | https://www.sciencedirect.com/science/article/pii/S0004370201001291

[24] 4 great leaps machine learning made in 2015 | Sergar Yegulalp | https://www.infoworld.com/article/3017250/4-great-leaps-machine-learning-made-in-2015.html

[25] Limitations of Deep Learning in AI Research | Roberto Iriondo | Towards Data Science | https://towardsdatascience.com/limitations-of-deep-learning-in-ai-research-5eed166a4205

[26] Forty percent of ‘AI startups’ in Europe don’t use AI, claims report | The Verge | https://www.theverge.com/2019/3/5/18251326/ai-startups-europe-fake-40-percent-mmc-report

[27] This smart toothbrush claims to have its very own ‘embedded AI’ | The Verge | https://www.theverge.com/circuitbreaker/2017/1/4/14164206/smart-toothbrush-ara-ai-kolibree

[28] The Coming AI Autumn | Jeffrey P. Bigham | http://jeffreybigham.com/blog/2019/the-coming-ai-autumnn.html

[29] Forty percent of ‘AI startups’ in Europe don’t use AI, claims report | The Verge | https://www.theverge.com/2019/3/5/18251326/ai-startups-europe-fake-40-percent-mmc-report

[30] The State of AI: Divergence | MMC Ventures | https://www.mmcventures.com/wp-content/uploads/2019/02/The-State-of-AI-2019-Divergence.pdf

[31] Top Sales & Marketing Priorities for 2019: AI and Big Data, Revealed by Survey of 600+ Sales Professionals | Business Wire | https://www.businesswire.com/news/home/20190129005560/en/Top-Sales-Marketing-Priorities-2019-AI-Big

[32] Artificial Intelligence Beats the Hype With Stunning Growth | Forbes | https://www.forbes.com/sites/jonmarkman/2019/02/26/artificial-intelligence-beats-the-hype-with-stunning-growth/#4e8507391f15

[33] Misuse of AI can destroy customer loyalty: here’s how to get it right | Compare the Cloud | https://www.comparethecloud.net/articles/misuse-of-ai-can-destroy-customer-loyalty-heres-how-to-get-it-right/

[34] Timeline of Artificial Intelligence | Wikipedia | https://en.wikipedia.org/wiki/Timeline_of_artificial_intelligence#1950s

[35] Computer Chess | Wikipedia | https://en.wikipedia.org/wiki/Computer_chess

[36] Artificial Intelligence at Carnegie Mellon University |Machine Learning Department at Carnegie Mellon University | https://www.youtube.com/watch?v=HH-FPH0vpVE

[37] Search Control Methods in Deep Blue | Semantic Scholar | https://pdfs.semanticscholar.org/211d/7268093b4dfce8201e8da321201c6cd349ef.pdf

[38] Is Winter Coming? | University of California, Berkeley | https://pha.berkeley.edu/2018/12/01/is-winter-coming-artificial-intelligence-in-healthcare/

[39] AI Winter | Wikipedia | https://en.wikipedia.org/wiki/AI_winter

[40] A Very Short History of Data Science | Forbes | https://www.forbes.com/sites/gilpress/2013/05/28/a-very-short-history-of-data-science/#3c828f2055cf

[41] Deep Learning Revolution | Wikipedia | https://en.wikipedia.org/wiki/Deep_learning#Deep_learning_revolution

[42] Turing Award Winners 2018 | ACM | https://amturing.acm.org/byyear.cfm

[43] Recommender System | Wikipedia | https://en.wikipedia.org/wiki/Recommender_system

[44] The discourse is unhinged: how the media gets AI alarmingly wrong | The Guardian | https://www.theguardian.com/technology/2018/jul/25/ai-artificial-intelligence-social-media-bots-wrong

[45] Retailers moved from AI hype to reality in 2018 | iXtenso | https://ixtenso.com/technology/retailers-moved-from-ai-hype-to-reality-in-2018.html

[46] Deep Learning & The Human Brain, Inspiration not Imitation | Imaginea | https://www.imaginea.com/sites/deep-learning-human-brain-inspiration-not-imitation/

[47] Carnegie Mellon Dean of Computer Science on the Future of AI | Forbes | https://www.forbes.com/sites/peterhigh/2017/10/30/carnegie-mellon-dean-of-computer-science-on-the-future-of-ai/#164487aa2197

[48] History of AI Winters | Actuaries Digital | https://www.actuaries.digital/2018/09/05/history-of-ai-winters/

[49] Recent Advances in Deep Learning: An Overview | Arxiv | https://arxiv.org/pdf/1807.08169.pdf

[50] Tech Giants Are Paying Huge Salaries for Scarce AI Talent | New York Times | https://www.nytimes.com/2017/10/22/technology/artificial-intelligence-experts-salaries.html

[51] Artificial Intelligence is a Moving Target | AnswerRocket | https://www.nytimes.com/2017/10/22/technology/artificial-intelligence-experts-salaries.html

[52] Search Control Methods in Deep Blue | Semantic Scholar | https://pdfs.semanticscholar.org/211d/7268093b4dfce8201e8da321201c6cd349ef.pdf

[53] Search Tree Algorithms on Deep Blue | Stanford University | http://stanford.edu/~cpiech/cs221/apps/deepBlue.html

Towards AI — Multidisciplinary Science Journal

The Best of Tech, Science and Engineering.


Roberto Iriondo

Written by

Front End Engineer @mldcmu | For Authors @towards_ai → https://towardsai.net/contribute | 🌎→ https://towardsai.net/@robiriondo | Views & opinions are my own.

Towards AI — Multidisciplinary Science Journal

Towards AI is a leading multidisciplinary science journal. Towards AI publishes the best of tech, science, and engineering. Read by thought-leaders and decision-makers around the world.
