5 Busted Myths that Help You Better Understand AI

Ciprian BORODESCU
MorphL
15 min read · Apr 3, 2019


A few weeks ago, a dispiriting headline popped up across the web: “Nearly Half Of All ‘AI Startups’ Are Cashing In On Hype,” wrote Forbes; “40% of ‘AI startups’ in Europe don’t actually use AI,” reported The Verge.

The claim was extracted from a statement by David Kelnar, head of research at MMC Ventures, which had just released a comprehensive report called The State of AI 2019: Divergence.

The 150-page report features extensive, in-depth observations about the current stage of artificial intelligence’s evolution, with its many ramifications and nuances. Instead of reflecting the balanced overview the report provides, these articles zeroed in on a single statement:

“We individually reviewed the activities, focus and funding of 2,830 purported AI startups in the 13 EU countries most active in AI — Austria, Denmark, Finland, France, Germany, Ireland, Italy, the Netherlands, Norway, Portugal, Spain, Sweden and the United Kingdom. Together, these countries also comprise nearly 90% of EU GDP. In approximately 60% of the cases — 1,580 companies — there was evidence of AI material to a company’s value proposition.”

Source: The State of AI 2019: Divergence

We believe that unjustly amplifying a single aspect of an otherwise complex picture adds further distortion to a concept that is already challenging to understand.

That’s why we wanted to address some of the key misconceptions related to artificial intelligence, hoping to use our own experience to clarify things.

Why it’s important to set healthy landmarks

Talking about AI is not only important for our future; it is sorely necessary.

Beyond the hype, the Sci-Fi-inspired scenarios, and the flashy headlines, we need to clarify a few aspects:

  • what AI truly is
  • what it can do
  • and what its development means for every one of us.

As a team working with artificial intelligence, democratizing AI is one of our core principles. For us, this also involves helping others understand this technology in both its technical and non-technical aspects.

It’s essential we gain clarity around these concepts because AI has reached a tipping point. Its impact is increasingly visible. The numbers around it are growing exponentially.

For example, according to The State of AI 2019: Divergence report:

  • “adoption of AI has tripled in 12 months”
  • “Europe is home to 1,600 AI startups”
  • “In 2013, one in 50 new startups embraced AI. Today, one in 12 put AI at the heart of their value proposition.”
  • “One in six European AI companies is a ‘growth’-stage company with over $8m of funding.”

The transformational potential of this technology will shake the very foundations of our societies, creating massive opportunities for growth while posing equally huge challenges.

To be able to develop AI that empowers growth and evolution, we must first understand it and continue learning about it as it changes and develops.

Healthy landmarks and accurate self-education about AI lead to:

  • making better decisions about building and using AI, both as a business and as an individual
  • a clearer understanding of its impact, both positive and negative
  • a more responsible and accountable way of doing AI-powered business
  • a stronger grasp of potential risks and possibilities for individual users
  • a more constructive and objective perspective on the inherent changes AI will bring about in our lives.

We believe it’s necessary to demystify this technology and to “translate” the abstract principles that govern it so everyone can understand what’s going on. People fear what they don’t know, and fear never leads to good decisions.

For example, we’d like to help non-technical specialists read statements like the one below from a perspective of potential for growth, not the “robots will take our jobs” cliché.

“AI technology is important because it enables human capabilities — understanding, reasoning, planning, communication and perception — to be undertaken by software increasingly effectively, efficiently and at low cost.”

It’s not humans vs AI. It’s a matter of collective intelligence where technology complements human expertise, creativity, and judgment.

Now it’s time to get practical.

What is Artificial Intelligence?

A less-than-final but useful definition

Andreas Kaplan and Michael Haenlein from ESCP Europe Business School define AI as:

“a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation.”

By no means is this a perfect definition. In fact, as Kaplan and Haenlein themselves admit:

“AI is still a surprisingly fuzzy concept and a lot of questions surrounding it are still open.”

For example, you’ll read articles where AI, machine learning (ML) and deep learning (DL) are used interchangeably. This is incorrect, as both ML and DL are subsets of artificial intelligence.

All ML is AI, but not all AI is ML.

All DL is ML, but not all ML is DL.

If you’re not as familiar with AI as people who are deeply involved in it are, you may be surprised to know that the term artificial intelligence is over 60 years old!

The term was coined in 1956 by John McCarthy, then an assistant professor at Dartmouth College. He used it to describe the broad concept of hardware or software that exhibits behavior which appears intelligent.

Nowadays, the term preserves its breadth. However, advances in research and development have given shape to some of its aspects, making them easier to grasp.

For instance:

“Machine learning enables programs to learn through training, instead of being programmed with rules. By processing training data, machine learning systems provide results that improve with experience.”

Source: The State of AI 2019: Divergence

Real-life ML applications include predicting customer churn or detecting fraud in credit card transactions.
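To make that a bit more tangible, here’s a minimal sketch of what a churn predictor might look like. Everything in it is hypothetical: the feature names and toy data are invented for illustration, and scikit-learn’s logistic regression stands in for whatever model a production system would actually use.

```python
# A minimal churn-prediction sketch. Feature names and toy data
# are hypothetical, purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is a customer: [monthly_visits, support_tickets, months_subscribed]
X = np.array([
    [20, 0, 24], [2, 5, 3], [15, 1, 12], [1, 4, 2],
    [30, 0, 36], [3, 6, 1], [18, 2, 18], [2, 3, 4],
])
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # 1 = churned, 0 = retained

# The model learns the pattern from examples instead of hand-written rules.
model = LogisticRegression(max_iter=1000).fit(X, y)

# Estimate churn probability for a new, unseen customer.
new_customer = [[4, 4, 2]]
print(f"Churn probability: {model.predict_proba(new_customer)[0][1]:.2f}")
```

The point isn’t the algorithm; it’s that the rule (“customers who look like this tend to churn”) is learned from examples rather than written by hand.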

When it comes to deep learning, you should know it’s often used to tackle specific classes of problems. For example:

“Deep learning is valuable because it transfers an additional burden — the process of feature extraction — from the programmer to their program.”

Source: The State of AI 2019: Divergence

In practice, deep learning uses networks of artificial neurons loosely inspired by the animal or human brain. The objective is to train these networks to extract features from data sets, features they can later use to optimize processes and solve problems.
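As a rough illustration of that shift, here’s a small sketch where a network learns its own features from raw pixel values, with no hand-engineered inputs. We’re using scikit-learn’s built-in digits dataset, and a small multi-layer perceptron stands in for a full deep learning stack; the layer sizes are arbitrary choices, not recommendations.

```python
# Learned feature extraction: a small neural network trained on raw pixels,
# with no hand-engineered features. A toy stand-in for "real" deep learning.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 8x8 grayscale images of handwritten digits

X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=42
)

# The hidden layers learn intermediate representations (edges, strokes)
# directly from the pixels; this is the feature-extraction burden the
# report describes as moving from the programmer to the program.
net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=42)
net.fit(X_train, y_train)

print(f"Test accuracy: {net.score(X_test, y_test):.2f}")
```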

Whichever AI subset we’re talking about, ML or DL (acronyms galore, we know), the objective is to develop technology that can learn through practice.

This is a step beyond current rule-based systems, which have inherent limitations, but it’s nowhere near the onset of technological singularity.

Where unrealistic expectations about AI come from

The interesting thing about AI — one of many — is that it simultaneously exceeds and falls behind our expectations of it.

Our unrealistic assumptions about AI stem from a series of factors, including:

  • Shortage of technical knowledge — AI and its subsets require a specific skill set that is not widely available
  • Confusion about the terms — as we’ve explored, using AI, ML, and DL as synonyms is incorrect and produces further confusion for non-technical aficionados
  • Lack of distinction between narrow AI (ANI) and general AI — where narrow AI (or weak AI) only works in task-specific contexts; this is the type of AI we have today
  • Insufficient educational content that helps clarify concepts — there’s highly technical content about AI, and there are superficial, fragmented pieces that skim the surface, but not much in between, which is something we’re trying to address with our work
  • Biased media portrayals — the media hype relies on shocking headlines, so we rarely see balanced approaches on this topic, which is why the general perception continues to be distorted
  • Emerging tech that hasn’t yet been established or regulated — the lack of established players, norms, and regulations leaves plenty of room for guesswork and confusion, which is fairly normal at this stage
  • Lack of transparency in the field — there are still plenty of organizations that work in silos, working hard to protect their competitive advantage; this leads to speculation and uncertainty over what’s going to happen with AI in real life
  • Negative examples — these also contribute heavily to the idea that AI is to be feared because it will destroy societies and enslave humans to robot overlords; Facebook’s numerous missteps and China’s social credit system are just two examples that give AI a bad rap
  • Science fiction and Hollywood movies — we couldn’t help but mention the impact that media products have on people, as robots have been portrayed as evil attackers in dystopian futures for as long as we can remember.

A combination of these factors leads to a distorted picture of what artificial intelligence is, does, and can become.

Understanding AI is a matter of calibrating our hopes and dreams based on factual knowledge. You can get a firm grasp of artificial intelligence development without being highly technical. It’s actually one of the things we’re trying to accomplish here.

The truth is:

“The capabilities of AI systems have reached a tipping point due to the confluence of seven factors: new algorithms; the availability of training data; specialised hardware; cloud AI services; open source software resources; greater investment; and increased interest.”

“A virtuous cycle has developed. Progress in AI is attracting investment, entrepreneurship and interest. These, in turn, are accelerating progress.”

Source: The State of AI 2019: Divergence

AI is misused and misunderstood as a term because:

  • It’s still new outside the tech world and unknown = scary
  • It’s very complex and abstract which makes it difficult to relate to or develop an interest in for most people
  • It’s highly technical, so only elite developers, mathematicians, and PhDs truly understand its intricacies
  • There are no agreed-upon standards yet, which makes it easy to launch bold claims, as most people don’t have the knowledge to evaluate them
  • It’s unregulated and the lack of formal context contributes to this playing field where anyone can declare they’re doing advanced tech in spite of a very different backstage reality.

Incumbents like Google, Apple, or Microsoft, along with both legitimate and illegitimate challengers, are currently trying to make headway in this exciting arena. They do it because the stakes are higher than in any other form of tech, including blockchain and IoT:

“AI may be the fastest paradigm shift in technology history. In the course of three years, the proportion of enterprises with AI initiatives will have grown from one in 25 to one in three. Adoption has been enabled by the prior paradigm shift to cloud computing, the availability of plug-and-play AI services from global technology vendors and a thriving ecosystem of AI-led software suppliers.”

Source: The State of AI 2019: Divergence

AI misconceptions you can let go of right now

People in tech and in the business world have great expectations for AI. We’ve been aboard the same train for over a year, since we started building MorphL. It’s precisely this enthusiasm and desire to build helpful AI applications that drives us to do some serious myth-busting.

Our desire is to help increase artificial intelligence adoption in a balanced, responsible manner. This is one of the ways we follow through on this ambition.

1. AI will soon have the ability to think like humans

At this moment, artificial intelligence is already yielding great benefits for those who integrate it into their technology stack.

Benefits include:

  • Innovation for products and services — how they work, what they offer, how they’re delivered
  • Efficiency for processes — automating repetitive tasks, processing large volumes of data, etc.
  • Speed — automated systems don’t need breaks and they can move faster than humans when dealing with certain tasks
  • Scalability — when freed from the constraints of human capacity, AI can enable exponential growth, replicating successful achievements on larger scales
  • Reducing costs — when combining automation and scalability, organizations can save hefty amounts of money by optimizing their processes based on previous results
  • Competitive advantage — owning large sets of data and retaining highly skilled professionals capable of developing AI applications will become increasingly coveted business assets that will fuel business growth.

It’s true that artificial intelligence technology has made significant progress, but it’s still nowhere close to human-level capabilities. Here’s a quick example from one of the biggest companies in the world:

“Microsoft reported that its speech recognition system achieved human-level recognition for the first time in history. Improvements are continuing.”

Source: The State of AI 2019: Divergence

Andrew Ng, one of the most renowned AI scientists in the world, also highlights this when explaining what happens to algorithms once they reach human-level performance on a specific task. It turns out that their progress and accuracy slow down after that point and eventually plateau.

There’s still a long way to go from recognition to being able to make complicated decisions and inferences, so the robot overlords are not here yet. 🙂

However, what Andrew Ng and other AI specialists propose is a less polarizing way to look at this:

“If machine learning can automate the mundane and routine things in our world, then we can lean into uniquely human strengths.”

This entire video is a fascinating collection of case studies which highlight how we’re currently benefiting from AI technology.

2. AI works with uncompromised objectivity

As we’ve seen in recent years, technology reflects the shortcomings of human nature. Our vulnerabilities, biases, desires and fears surface in data breaches, social media propaganda and various other examples.

The same can happen with AI if developers and decision-makers don’t make a conscious effort to override their biases while engineering this tech.

It’s no secret that:

“Biased systems could increase inequality. Data used to train AI systems reflects historic biases, including those of gender and race. Biased AI systems could cause individuals economic loss, loss of opportunity and social stigmatisation.”

Source: The State of AI 2019: Divergence

Recognizing and removing bias from AI systems is a big responsibility, one that we fully embrace at MorphL. We also acknowledge that one way to ensure this happens is to create transparency through open-source implementations. As a result, accountability to users and the general public will help enforce this key principle, in our work and in that of other specialists.

“New, open standards and governance frameworks will boost consumers’ and regulators’ confidence that model-driven decisions are accurate, explainable, and free from bias — accelerating the rate of AI adoption on an industrial scale.”

Source: The State of AI 2019: Divergence

When algorithms are tasked with decisions that affect human life, we must ensure we don’t “automate inequality” along the way.
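To make “checking for bias” concrete, here’s a minimal sketch of one basic test, demographic parity, run on hypothetical model decisions and group labels. Real-world audits are far more involved; this only shows the shape of the question.

```python
# A minimal bias check: comparing a model's approval rates across two groups
# (demographic parity). All decisions and group labels here are hypothetical.
import numpy as np

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # model decisions, 1 = approved
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rate_a = y_pred[group == "A"].mean()  # approval rate for group A
rate_b = y_pred[group == "B"].mean()  # approval rate for group B

# A large gap suggests the model treats the groups differently and that
# the training data and features deserve a closer look.
print(f"Group A: {rate_a:.0%} | Group B: {rate_b:.0%} | gap: {abs(rate_a - rate_b):.0%}")
```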

Ethical considerations and the diversity specific to AI users must be integrated into the core of any system and application this technology supports.

This is also work in progress.

3. AI already has human-level self-teaching capabilities

We now have autonomous cars that can recognize the world around them and make decisions based on those inferences.

AI-fueled medtech can now identify tumors in medical imagery.

Voice-controlled devices provide us with seamless experiences that we never envisioned 15 years ago.

Deep learning has made all this possible and more! However, these artificial neural networks, sophisticated as they may be, can only approximate some of a human brain’s functions. Their competence is still limited.

Exaggerating AI’s development instead of being pragmatic about it will only inflate the unrealistic expectations we mentioned earlier.

The current reality of artificial intelligence is somewhere between these two facts:

AI will help save up to $16bn by 2026 in the medical sector by reducing medication dosage errors.

Source: Artificial Intelligence (AI): healthcare’s new nervous system by Accenture

and

75% of Netflix users select films recommended to them by the company’s AI algorithms (2014).

Source: Mohammad Sabah, Netflix Senior Data Scientist in a statement published on Gigaom

4. AI is so advanced we’re getting close to singularity

By now you’re probably convinced that we’re really a long way from technological singularity. If not, then keep reading.

The singularity hypothesis states that AI can develop into artificial superintelligence (ASI) capable of triggering uncontrollable, self-replicating technological evolution that will fundamentally change the world.

Simply put, not only could AI become self-aware but it could also develop the ability to build other AI systems with similar capacities.

The reason this scenario is so improbable in the foreseeable future is that AI development depends on large volumes of data for training.

“The creation and availability of data has grown exponentially in recent years, enabling AI. Today, humanity produces 2.5 exabytes (2,500 million gigabytes) of data daily (Google). 90% of all data has been created in the last 24 months (SINTEF).”

Source: The State of AI 2019: Divergence

In spite of this wealth of data we’re producing, it’s not nearly enough to make the sudden jump to artificial superintelligence.

We may have gone from documents and transactional data all the way to metadata collected by sensors, but the latter has only been happening for a couple of years. Data production rates are bound to increase exponentially, thus fueling AI evolution, but that’s unlikely to suffice for bridging the gap to ASI.

Besides data, organizations also require massive financial resources to build towards artificial superintelligence. That and the right hardware to support this colossal need for processing power.

The report that inspired this article indicates fast progress but stops well short of suggesting that we might be on the brink of ASI:

“Investment dollars into early stage AI companies globally have increased fifteen-fold in five years, to an estimated $15bn in 2018. (CB Insights, MMC Ventures)”

“In 2019, as well as enabling next generation AI in the cloud, custom silicon will transform AI at the edge by coupling high performance with low power consumption and small size.”

Source: The State of AI 2019: Divergence

5. AI has made sudden progress

It’s been over 60 years since AI research began. From unsophisticated early systems to current applications, there’s been a great deal of improvement. Nonetheless, 60 years is not exactly… sudden.

“Since its inception in the 1950s, AI research has focused on five fields of enquiry:

1. Knowledge: The ability to represent knowledge about the world.

2. Reasoning: The ability to solve problems through logical reasoning.

3. Planning: The ability to set and achieve goals.

4. Communication: The ability to understand written and spoken language.

5. Perception: The ability to make deductions about the world based on sensory input.”

Source: The State of AI 2019: Divergence

In the last couple of years, a series of key factors have accelerated AI development, hence the false impression of swift progress.

More data, better hardware, open-source frameworks, and sizeable investments contributed to current AI systems and applications. These achievements rest on the shoulders of generations of specialists who worked extremely hard to advance AI layer by layer.

In terms of perception and adoption, artificial intelligence is now crossing the chasm from early adopters to the early and even late majority, which is an indication of it going mainstream.

We can expect this to be a powerful catalyst for even more vigorous growth in every aspect of this tech sector. At the same time, it’s important that we never forget that progress and big achievements are always incremental, in spite of what it may look like when reflected in the media.

How MorphL is creating accountability and transparency

We wanted to address these AI myths and help debunk them for anyone who’s interested, because we believe they can influence progress, both for the better and for the worse.

We are fully invested in following through on our guiding principles to ensure our work contributes to AI progress in a way that’s ethical, fair, and accountable.

Because we’re building AI to solve complex problems, the technology itself can’t be oversimplified if it’s to address sophisticated issues.

As Cassie Kozyrkov, Chief Decision Intelligence Engineer at Google, puts it:

“Simple solutions don’t work for tasks that need complicated solutions. So AI comes to the rescue with — surprise! — complicated solutions.”

However, it’s part of our mission to ensure the AI technology we build is transparent in the sense of being explainable.

Practical experience has taught us that trust in AI hinges both on thorough testing and on our ability to explain its decision-making process.
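For linear models, one simple form of that explainability is inspecting which inputs drive a prediction. Here’s a sketch that reuses the hypothetical churn features from earlier; more complex models need dedicated tooling, but the principle is the same.

```python
# Basic explainability for a linear model: which features push a prediction
# toward "churn"? Feature names and data are hypothetical, as before.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["monthly_visits", "support_tickets", "months_subscribed"]
X = np.array([[20, 0, 24], [2, 5, 3], [15, 1, 12],
              [1, 4, 2], [30, 0, 36], [3, 6, 1]])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = churned

model = LogisticRegression(max_iter=1000).fit(X, y)

# Positive coefficients push toward "churn", negative toward "retained".
for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```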

Showing rather than telling has always been more effective for building confidence. So in order to provide non-AI specialists and enthusiasts with clear and reliable information about this side of the tech world, we’re doing two things:

  • Documenting our progress on this blog, where we intend to publish practical case studies and insights
  • Sharing the knowledge we acquire through the open-source MorphL community edition.

We’d love to know if this article helped clear some of the confusion around AI and AI-related concepts. Plus, if you have any suggestions of what might make useful additions, we’re just a comment away!

Originally published at morphl.io on April 3, 2019.
