
5 Busted Myths that Help You Better Understand AI

Apr 3, 2019 · 15 min read

A few weeks ago, a dispiriting headline popped up across the web: “Nearly Half Of All ‘AI Startups’ Are Cashing In On Hype,” wrote Forbes; “40% of ‘AI startups’ in Europe don’t actually use AI,” reported The Verge.

The claim was extracted from a statement by David Kelnar, head of research at MMC Ventures, which had just released an encompassing report called The State of AI 2019: Divergence.

The 150-page report features extensive, in-depth observations about the current stage of artificial intelligence’s evolution, with its many ramifications and nuances. Instead of reflecting the balanced overview the report provides, these articles focused on a single statement:

“We individually reviewed the activities, focus and funding of 2,830 purported AI startups in the 13 EU countries most active in AI — Austria, Denmark, Finland, France, Germany, Ireland, Italy, the Netherlands, Norway, Portugal, Spain, Sweden and the United Kingdom. Together, these countries also comprise nearly 90% of EU GDP. In approximately 60% of the cases — 1,580 companies — there was evidence of AI material to a company’s value proposition.”

Source: The State of AI 2019: Divergence

We believe that unjustly amplifying a single aspect of an otherwise complex picture further distorts a concept that is already challenging to understand.

That’s why we wanted to address some of the key misconceptions related to artificial intelligence, hoping to use our own experience to clarify things.

Why it’s important to set healthy landmarks

Talking about AI is not only important for our future; it is sorely necessary.

Beyond the hype, the Sci-Fi-inspired scenarios, and the flashy headlines, a few aspects need clarifying.

As a team working with artificial intelligence, democratizing AI is one of our core principles. For us, this also involves helping others understand this technology in both its technical and non-technical aspects.

It’s essential we gain clarity around these concepts because AI has reached a tipping point. Its impact is increasingly visible. The numbers around it are growing exponentially.

The State of AI 2019: Divergence report, for example, backs this up with extensive figures.


The transformational opportunities this technology offers will shake the very foundations of our societies, creating massive opportunities for growth while posing equally huge challenges.

To be able to develop AI that empowers growth and evolution, we must first understand it and continue learning about it as it changes and develops.

Healthy landmarks and accurate self-education about AI lead to better decisions.

We believe it’s necessary to demystify this technology and to “translate” the abstract principles that govern it so everyone can understand what’s going on. People fear what they don’t know and fear never leads to good decisions.

For example, we’d like to help non-technical specialists read statements like this one from a perspective of growth potential rather than the “robots will take our jobs” cliché.

“AI technology is important because it enables human capabilities — understanding, reasoning, planning, communication and perception — to be undertaken by software increasingly effectively, efficiently and at low cost.”

It’s not humans vs AI. It’s a matter of collective intelligence where technology complements human expertise, creativity, and judgment.

Now it’s time to get practical.

What is Artificial Intelligence

A less-than-final but useful definition

Andreas Kaplan and Michael Haenlein from ESCP Europe Business School define AI as:

“a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation.”

By no means is this a perfect definition. In fact, as Kaplan and Haenlein themselves admit:

“AI is still a surprisingly fuzzy concept and a lot of questions surrounding it are still open.”

For example, you’ll read articles where AI, machine learning (ML) and deep learning (DL) are used interchangeably. This is incorrect, as both ML and DL are subsets of artificial intelligence.

All ML is AI, but not all AI is ML.

All DL is ML, but not all ML is DL.


If you’re not deeply involved in AI, you may be surprised to learn that the term artificial intelligence is over 60 years old!

The term was coined in 1956 by John McCarthy, then an assistant professor at Dartmouth College. He used it to describe the broad concept of hardware or software that exhibits seemingly intelligent behavior.

Nowadays, the term retains its breadth. However, advances in research and development have sharpened some of its aspects, making them easier to grasp.

For instance:

“Machine learning enables programs to learn through training, instead of being programmed with rules. By processing training data, machine learning systems provide results that improve with experience.”

Source: The State of AI 2019: Divergence

Real-life ML applications include predicting churn or forecasting fraud in credit card transactions.
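The “learning through training, instead of being programmed with rules” idea can be sketched in a few lines. This is an illustrative toy only: the features, the data points, and the tiny hand-rolled logistic regression below are all invented for the example, not anyone’s production churn model.

```python
# Toy sketch: a tiny logistic-regression churn model trained with
# gradient descent. Nobody writes the churn "rules" by hand; the
# weights are adjusted to fit the examples.
import math

# Invented training set: (support_tickets, tenure_years) -> churned?
data = [
    ((5, 1), 1), ((6, 0), 1), ((4, 1), 1), ((7, 2), 1),
    ((1, 5), 0), ((0, 4), 0), ((2, 6), 0), ((1, 3), 0),
]

w = [0.0, 0.0]
b = 0.0
lr = 0.1

def predict(x):
    """Churn probability via the logistic function."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

# Stochastic gradient descent over the examples.
for _ in range(2000):
    for x, y in data:
        err = predict(x) - y
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

print(round(predict((6, 1))))  # many tickets, short tenure -> 1 (churn)
print(round(predict((0, 5))))  # quiet, loyal customer -> 0 (stays)
```

The same shape of model, with real features and far more data, is what sits behind the churn and fraud predictions mentioned above.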

When it comes to deep learning, you should know it’s often used to work out specific issues. For example:

“Deep learning is valuable because it transfers an additional burden — the process of feature extraction — from the programmer to their program.”

Source: The State of AI 2019: Divergence

In practice, deep learning aims to emulate the brain, whether animal or human. The objective is to train these networks of artificial neurons to extract features from data sets that they can later use to optimize processes and solve problems.

Whether we’re talking about ML or DL (acronyms galore, we know), the objective is the same: to develop technology that can learn through practice.

This is one step further than current rules-based systems which have inherent limitations but it’s nowhere near the onset of technological singularity.
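The “feature extraction” point can be illustrated with the classic XOR network. In the sketch below the hidden-layer weights are set by hand so the example stays deterministic; the whole point of deep learning is that such intermediate features are learned from data rather than hand-set like this.

```python
# A tiny two-layer network computing XOR. The hidden units act as
# intermediate "features" (OR and NAND) that the output layer combines.
# In deep learning, these weights would be learned, not written by hand.

def step(z):
    """Threshold activation: fires when the weighted sum is positive."""
    return 1 if z > 0 else 0

def neuron(inputs, weights, bias):
    return step(sum(i * w for i, w in zip(inputs, weights)) + bias)

def xor(x1, x2):
    h_or = neuron((x1, x2), (1, 1), -0.5)     # hidden feature: x1 OR x2
    h_nand = neuron((x1, x2), (-1, -1), 1.5)  # hidden feature: NOT (x1 AND x2)
    return neuron((h_or, h_nand), (1, 1), -1.5)  # OR AND NAND = XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))
```

No single neuron can compute XOR; it only works because the hidden layer first extracts useful intermediate features, which is exactly the burden deep learning shifts from the programmer to the program.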


Where unrealistic expectations about AI come from

The interesting thing about AI — one of many — is that it simultaneously exceeds and falls behind our expectations of it.

Our unrealistic assumptions about AI stem from a combination of factors: the hype, the Sci-Fi-inspired scenarios, and the flashy headlines mentioned earlier. Together, they paint a crooked picture of what artificial intelligence is, does, and can become.

Understanding AI is a matter of calibrating our hopes and dreams based on factual knowledge. You can get a firm grasp of artificial intelligence development without being highly technical. It’s actually one of the things we’re trying to accomplish here.

The truth is:

“The capabilities of AI systems have reached a tipping point due to the confluence of seven factors:

* new algorithms;

* the availability of training data;

* specialised hardware;

* cloud AI services;

* open source software resources;

* greater investment;

* and increased interest.”

“A virtuous cycle has developed. Progress in AI is attracting investment, entrepreneurship and interest. These, in turn, are accelerating progress.”

Source: The State of AI 2019: Divergence

As a term, AI is frequently misused and misunderstood.

Incumbents like Google, Apple or Microsoft and legitimate and illegitimate challengers are currently trying to make headway in this exciting playing field. They do it because the stakes are higher than in any other form of tech, including blockchain and IoT:

“AI may be the fastest paradigm shift in technology history. In the course of three years, the proportion of enterprises with AI initiatives will have grown from one in 25 to one in three. Adoption has been enabled by the prior paradigm shift to cloud computing, the availability of plug-and-play AI services from global technology vendors and a thriving ecosystem of AI-led software suppliers.”

Source: The State of AI 2019: Divergence


AI misconceptions you can let go of right now

People in tech and in the business world have great expectations for AI. We’ve been aboard the same train for over a year, since we started building MorphL. It’s precisely this enthusiasm and desire to build helpful AI applications that drives us to do some serious myth-busting.

Our desire is to help increase artificial intelligence adoption in a balanced, responsible manner. This is one of the ways we follow through on this ambition.

1. AI will soon have the ability to think like humans

At this moment, artificial intelligence is already yielding real benefits, from automating routine tasks to sharper predictions, for those who integrate it into their technology stack.

It’s true that artificial intelligence technology has made significant progress but it’s still nowhere close to human-level capabilities. Here’s a quick example from one of the biggest companies in the world:

“Microsoft reported that its speech recognition system achieved human-level recognition for the first time in history. Improvements are continuing.”

Source: The State of AI 2019: Divergence

Andrew Ng, one of the most renowned AI scientists in the world, also highlights this when explaining what happens to algorithms once they reach human-level performance on a specific task: their progress slows after that point, and accuracy eventually plateaus.

There’s still a long way to go from recognition to being able to make complicated decisions and inferences, so the robot overlords are not here yet. 🙂

However, Andrew Ng and other AI specialists propose a less polarizing way to look at this:

“If machine learning can automate the mundane and routine things in our world, then we can lean into uniquely human strengths.”

This entire video is a fascinating collection of case studies which highlight how we’re currently benefiting from AI technology.

2. AI works with uncompromised objectivity

As we’ve seen in recent years, technology reflects the shortcomings of human nature. Our vulnerabilities, biases, desires and fears surface in data breaches, social media propaganda and various other examples.

The same can happen with AI if developers and decision-makers don’t make a conscious effort to override their biases while engineering this tech.

It’s no secret that:

“Biased systems could increase inequality. Data used to train AI systems reflects historic biases, including those of gender and race. Biased AI systems could cause individuals economic loss, loss of opportunity and social stigmatisation.”

Source: The State of AI 2019: Divergence

Recognizing and removing bias from AI systems is a big responsibility, one we fully embrace at MorphL. One way to ensure this happens is to create transparency through open-source implementations: accountability to users and the general public helps enforce this key principle, in our work and in that of other specialists.

“New, open standards and governance frameworks will boost consumers’ and regulators’ confidence that model-driven decisions are accurate, explainable, and free from bias — accelerating the rate of AI adoption on an industrial scale.”

Source: The State of AI 2019: Divergence

When algorithms are tasked with decisions that affect human life, we must ensure we don’t “automate inequality” along the way.

Ethical considerations and the diversity specific to AI users must be integrated into the core of any system and application this technology supports.

This, too, is work in progress.
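This “bias in, bias out” dynamic can be sketched in a few lines. The groups, the counts, and the naive frequency-table “model” below are all invented for illustration; real systems are more complex, but the mechanism is the same.

```python
# Toy sketch: a naive model that learns hiring decisions from biased
# historical data faithfully reproduces the bias.
from collections import defaultdict

# Invented records: (group, qualified, hired). Equally qualified
# candidates from group "B" were hired roughly half as often as "A".
history = (
    [("A", True, True)] * 9 + [("A", True, False)] * 1
    + [("B", True, True)] * 5 + [("B", True, False)] * 5
)

# "Training" here is just estimating P(hired | group, qualified) by counting.
counts = defaultdict(lambda: [0, 0])  # key -> [times hired, total seen]
for group, qualified, hired in history:
    counts[(group, qualified)][0] += hired
    counts[(group, qualified)][1] += 1

def hire_rate(group, qualified=True):
    hired, total = counts[(group, qualified)]
    return hired / total

# The model reproduces the historical disparity for equally qualified people:
print(hire_rate("A"))  # 0.9
print(hire_rate("B"))  # 0.5
```

Nothing in the code is malicious; the inequality lives entirely in the training data, which is why auditing data and measuring per-group outcomes matters.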

3. AI already has human-level self-teaching capabilities

We now have autonomous cars that can recognize the world that surrounds them and make decisions based on those inferences.

AI-fueled medtech can now identify tumors in medical imagery.

Voice-controlled devices provide us with seamless experiences that we never envisioned 15 years ago.

Deep learning has made all this possible and more! However, these artificial neural networks, sophisticated as they may be, only approximate some of a human brain’s functions. Their competence is still limited.

Exaggerating AI’s development instead of being pragmatic about it will only feed the unrealistic expectations we mentioned earlier.

The current reality of artificial intelligence is somewhere between these two facts:

AI will help save up to $16bn by 2026 in the medical sector by reducing medication dosage errors.

Source: Artificial Intelligence (AI): healthcare’s new nervous system by Accenture


75% of Netflix users select films recommended to them by the company’s AI algorithms (2014).

Source: Mohammad Sabah, Netflix Senior Data Scientist in a statement published on Gigaom

4. AI is so advanced we’re getting close to singularity

By now you’re probably convinced that we’re really a long way from technological singularity. If not, then keep reading.

The singularity hypothesis states that AI can develop into artificial superintelligence (ASI) capable of triggering uncontrollable, self-replicating technological evolution that will fundamentally change the world.

Simply put, not only could AI become self-aware but it could also develop the ability to build other AI systems with similar capacities.

The reason this scenario is so improbable over the foreseeable future is that AI development depends on large volumes of data for training.

“The creation and availability of data has grown exponentially in recent years, enabling AI. Today, humanity produces 2.5 exabytes (2,500 million gigabytes) of data daily (Google). 90% of all data has been created in the last 24 months (SINTEF).”

Source: The State of AI 2019: Divergence

In spite of this wealth of data we’re producing, it’s not nearly enough to make the sudden jump to artificial superintelligence.

We may have gone from documents and transactional data all the way to metadata collected by sensors but the latter has only been happening for a couple of years. Data production rates are bound to increase exponentially, thus fueling AI evolution, but it’s unlikely to suffice for bridging the gap to ASI.

Image for post
Image for post

Besides data, building towards artificial superintelligence would also require massive financial resources and the right hardware to support a colossal need for processing power.

The report that inspired this article indicates fast progress, but nothing in it suggests we are on the brink of ASI:

“Investment dollars into early stage AI companies globally have increased fifteen-fold in five years, to an estimated $15bn in 2018. (CB Insights, MMC Ventures)”

“In 2019, as well as enabling next generation AI in the cloud, custom silicon will transform AI at the edge by coupling high performance with low power consumption and small size.”

Source: The State of AI 2019: Divergence

5. AI has made sudden progress

It’s been over 60 years since AI research began. From unsophisticated early systems to current applications, there has been a great deal of improvement. Nonetheless, 60 years is not exactly… sudden.

“Since its inception in the 1950s, AI research has focused on five fields of enquiry:

1. Knowledge: The ability to represent knowledge about the world.

2. Reasoning: The ability to solve problems through logical reasoning.

3. Planning: The ability to set and achieve goals.

4. Communication: The ability to understand written and spoken language.

5. Perception: The ability to make deductions about the world based on sensory input.”

Source: The State of AI 2019: Divergence

In the last couple of years, a series of key factors have accelerated AI development, hence the false impression of swift progress.

More data, better hardware, open-source frameworks, and sizeable investments contributed to current AI systems and applications. These achievements rest on the shoulders of generations of specialists who worked extremely hard to advance AI layer by layer.

In terms of perception and adoption, artificial intelligence is now crossing the chasm from early adopters to the early and even late majority, which is an indication of it going mainstream.


We can expect this to be a powerful catalyst for even more vigorous growth in every aspect of this tech sector. At the same time, it’s important that we never forget that progress and big achievements are always incremental, in spite of what it may look like when reflected in the media.

How MorphL is creating accountability and transparency

We wanted to address these AI myths and help debunk them for anyone who’s interested, because we believe these myths can influence progress, for better or for worse.

We are fully invested in following through on our guiding principles to ensure our work contributes to AI progress in a way that’s ethical, fair, and accountable.

Because we’re building AI to solve complex problems, the technology itself can’t be oversimplified.

As Cassie Kozyrkov, Chief Decision Intelligence Engineer at Google, puts it:

“Simple solutions don’t work for tasks that need complicated solutions. So AI comes to the rescue with — surprise! — complicated solutions.”

However, it’s part of our mission to ensure the AI technology we build is transparent in the sense of being explainable.

Practical experience has taught us that trust in AI hinges both on thorough testing and on our ability to explain its decision-making process.
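As a hedged illustration of what “explainable” can mean in practice, here is a sketch of the simplest case: a linear scoring model, where each feature’s contribution can be reported alongside the decision. The feature names and weights are invented; real explainability tooling goes much further than this.

```python
# Toy sketch of explainability for a linear model: report each
# feature's contribution (weight * value) next to the score, so a
# non-specialist can see what drove the decision.

weights = {"pages_viewed": 0.4, "cart_value": 0.05, "days_inactive": -0.3}
bias = -1.0

def score(features):
    """Overall prediction score for one customer."""
    return bias + sum(weights[k] * v for k, v in features.items())

def explain(features):
    """Per-feature contributions, largest magnitude first."""
    contribs = {k: weights[k] * v for k, v in features.items()}
    return sorted(contribs.items(), key=lambda kv: -abs(kv[1]))

customer = {"pages_viewed": 10, "cart_value": 20, "days_inactive": 3}
print(score(customer))       # approximately 3.1
print(explain(customer)[0])  # pages_viewed is the biggest driver
```

For more complex, non-linear models the same question (“which inputs drove this decision?”) needs heavier machinery, which is exactly why explainability is an active engineering concern rather than a footnote.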

Showing rather than telling has always been more effective for building confidence. So, to provide non-AI specialists and enthusiasts with clear and reliable information about this side of the tech world, we open-source our implementations and we explain how our models reach their decisions.

We’d love to know if this article helped clear some of the confusion around AI and AI-related concepts. Plus, if you have any suggestions of what might make useful additions, we’re just a comment away!

Originally published at on April 3, 2019.


Applied AI/ML for eCommerce Automation (Techstars ‘19).


Written by

Co-founder & CEO at MorphL (Techstars ’19) | On a mission to help 10,000 companies with AI adoption | Love to play 🎾, squash and 🏓. #givefirst


