How intelligent is AI?

Alex Kozhevnikov
Published in Voice Tech Podcast
Jan 9, 2020

Current AI systems aren’t as intelligent as they are presented to be; frankly, they are dumb. Unfortunately, the phrase “artificial intelligence” has become so common that it is increasingly applied to anything that is automated.

How I feel when average people talk about the “AI threat”

But what is true intelligence?

According to Wikipedia, “Intelligence … can be described as the ability to perceive or infer information, and to retain it as knowledge to be applied towards adaptive behaviors within an environment or context.”
By that definition, any service that changes its behaviour because of new data could be called truly intelligent. But it has none of the hallmarks of human intelligence, such as self-awareness, learning, or emotional knowledge.

To judge whether a system is smart or not, the industry usually defines several KPIs for it (e.g. Goal Completion Rate, Fallback Rate) together with minimum target percentages, which usually depend on the customer and their goals.
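For illustration only, here is a minimal Python sketch of how such KPIs might be computed from conversation logs. The log fields and the 80% target are my own assumptions, not an industry standard:

```python
# Minimal sketch: computing two common bot KPIs from conversation logs.
# The log format and the 80% target below are illustrative assumptions.

conversations = [
    {"goal_completed": True,  "fallbacks": 0, "turns": 6},
    {"goal_completed": False, "fallbacks": 2, "turns": 4},
    {"goal_completed": True,  "fallbacks": 1, "turns": 8},
]

total = len(conversations)
total_turns = sum(c["turns"] for c in conversations)

# Goal Completion Rate: share of conversations where the user reached their goal.
gcr = sum(c["goal_completed"] for c in conversations) / total

# Fallback Rate: share of turns where the bot failed to understand and fell back.
fallback_rate = sum(c["fallbacks"] for c in conversations) / total_turns

TARGET_GCR = 0.80  # the minimum percentage agreed with the customer (assumed here)

print(f"GCR: {gcr:.0%}, Fallback Rate: {fallback_rate:.0%}")
print("smart enough" if gcr >= TARGET_GCR else "needs work")
```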

How does AI need to change/upgrade to actually become smart?

The main shortcoming today is that you have to mix pattern analysis, i.e. machine learning (useful when you have gigabytes of data), with fixed rules that encode domain- or case-specific meaning. Why? Classical machine learning methods simply find rules and patterns in the data, but what if the data changes? You have to retrain your neural network.

Sad but true: AI today is more about statistics than intelligence.
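As a rough sketch of the mix described above, a statistical model can handle the bulk of requests while hand-written, domain-specific rules override it wherever the meaning is fixed. The rule, the labels and the stub classifier below are hypothetical placeholders, not a recommended design:

```python
import re

# Minimal sketch of a hybrid pipeline: a statistical intent classifier
# plus fixed, domain-specific rules that override it. Everything here
# (the rule, the labels, the stub classifier) is an illustrative assumption.

RULES = [
    # Fixed rule: a case-specific pattern whose meaning never changes.
    (re.compile(r"\b(cancel|terminate)\b.*\bpolicy\b", re.I), "cancel_policy"),
]

def statistical_intent(text: str) -> str:
    """Stand-in for a trained model; it needs retraining whenever the data changes."""
    return "small_talk"  # a real model would return a learned label with a score

def detect_intent(text: str) -> str:
    # 1. Fixed rules first: cheap, predictable, domain-specific.
    for pattern, intent in RULES:
        if pattern.search(text):
            return intent
    # 2. Otherwise fall back to the pattern-finding (statistical) part.
    return statistical_intent(text)

print(detect_intent("I want to cancel my policy"))  # -> cancel_policy
print(detect_intent("hello there"))                 # -> small_talk
```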

Upcoming dedicated neural chips (something like CPUs and GPUs, but built for high-performance training of neural networks) may hugely reduce training time, but they won’t solve the problem itself. And without general AI, you still have to customise your service to make it work for you.

Is there a step before General AI?

General AI may appear in the next 10, 20 or 100 years; no one knows exactly when. And this is the biggest challenge, because without real human-like general AI we can’t reach a new level of “state of the art”. Why? A child doesn’t need to be bitten by a dog thousands of times to learn that “THAT IS DOGGO”. A machine learning system works differently: “Just give me millions of examples and train me. So you trained me, but your photo shows the doggo from another angle? Then I don’t see any doggo.”

“Beware of doggo”, as the AI said. That is what you get when you don’t make it specific.


That’s why image recognition and chat & voice bots so often misunderstand you and your data. You built a great chatbot for an insurance company that solves 95% of user requests? Connect it to the banking industry and it will solve only 10% of them. Computers are still intelligent enough to perform computations impressively fast, but they can’t solve “meaningful” problems.

It’s so hilarious when you hear about a “99.9%” working AI solution at a conference.

What is your approach? How are you doing things differently?

So in this case, I just follow a principle I read in Peter Thiel’s “Zero to One”, which goes roughly like this:

“Do not oppose man and machine, but make them work together.”

That is also what we do with our conversational voice bots: the bot asks and answers according to a scenario, but when we detect that something has gone wrong, we connect the caller seamlessly (you won’t hear it) to a real human agent.

According to our statistics, if people don’t solve their issue with a conversational voice bot during their first experience, there is a 63% probability that they will refuse subsequent conversations with bots. That is why we call this approach “tandem assistance”.
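A minimal sketch of what such a tandem loop could look like in code; the confidence threshold, function names and handoff mechanism are all my assumptions for illustration, not our actual implementation:

```python
# Illustrative sketch of a "tandem assistance" loop: the bot follows its
# scenario, and when it is no longer confident it hands the caller over
# to a human agent. All names and thresholds here are assumptions.

CONFIDENCE_THRESHOLD = 0.6

def bot_reply(user_utterance: str) -> tuple[str, float]:
    """Stand-in for the scripted bot: returns an answer and a confidence score."""
    if "invoice" in user_utterance.lower():
        return "Your invoice was sent yesterday.", 0.92
    return "Sorry, could you rephrase that?", 0.2

def transfer_to_human_agent(user_utterance: str) -> str:
    # Seamless from the caller's point of view: the agent simply continues the call.
    return f"[agent takes over the conversation: {user_utterance!r}]"

def handle_turn(user_utterance: str) -> str:
    answer, confidence = bot_reply(user_utterance)
    if confidence < CONFIDENCE_THRESHOLD:
        # Something has gone wrong: route the call to a human.
        return transfer_to_human_agent(user_utterance)
    return answer

print(handle_turn("Where is my invoice?"))
print(handle_turn("The thing is broken and I am angry"))
```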

A typical stock image from a Google search for “voice bot”. Goddamn, I really don’t understand why people draw a robot with a headset to represent a voice bot.

A good example: an exoskeleton for people (which extends their abilities) versus a full android robot.

An exoskeleton augments the person. Maybe we are looking for something like this, rather than for robots? From the BBC article: https://www.bbc.com/news/health-49907356

In my opinion, that path is more realistic than a “fully automated world”, judging by past experience (remember the Luddite movement, whose members eventually became servants of the very machines they fought).
