AIpower: Myths & Realities of AI

Ségolène Husson
Published in Preligens Stories · Feb 12, 2019 · 6 min read

Sometimes belittled, too often misunderstood, Artificial Intelligence is a catch-all term in the eyes of the public, with many different concepts lumped inside it.
That’s why, at Earthcube, we advocate for a clearer understanding of AI.

Introducing AIpower, an article series to better assimilate the concept of AI.

AI: what really lies behind the fuss

Artificial Intelligence. noun. abbreviation AI: the study of how to produce machines that have some of the qualities that the human mind has, such as the ability to understand language, recognize pictures, solve problems, and learn.

A lot of frustration surrounds the concept of AI, usually due to a misunderstanding of what AI can really achieve and where its limits lie.

In the 1990s, the press declared that machines had overtaken humans when a chess champion lost to a computer. This period coincided with the arrival of Terminator and the introduction of machines into our daily lives.

But the reality is far more moderate. The day when machines completely surpass humans has not yet arrived.

Yes, Artificial Intelligence is already very present in our day-to-day activities, through search engines, automatic translation tools, and personal assistants. But we are still very far from the sci-fi movies we were promised: J.A.R.V.I.S., Iron Man's personal assistant, is not quite ready.

Given that AI is already very present in our lives, it is interesting to observe how it is going to evolve over time.

According to Gartner’s Hype Cycle for Emerging Technologies, AI is an early innovation trigger and a facilitator of more than half of the upcoming disruptive technologies, in whatever form they take (autonomous vehicles, deep learning…), while the other half is linked to innovations such as AR, VR, blockchain, or quantum computing.

Gartner’s Hype Cycle for Emerging Technologies, 2018

Another illustration came in the summer of 2018, when a Chinese company launched a competition between machines and radiologists to see which would perform better at detecting tumors.
Just as machines had already beaten humans at chess and Go, the machine beat the radiologists, performing roughly 50% better than the humans.
As a result, doctors, who are regarded as intelligent people, can be caught up by machines, which are not supposed to be considered as such.

Hence the paradox: AI already achieves amazing things, yet it has still not reached the level of performance we were promised, that of a truly intelligent machine.

To us at Earthcube, the key to understanding this paradox is to recognize that the term ‘intelligence’ in this context is a false friend.
Human and artificial intelligence are highly different: each has its own strengths and weaknesses, which are not alike but are complementary. Used together, they achieve incredible things.

Hence, here is our definition of Artificial Intelligence: AI is a tool invented to serve humankind, one that brings raw computational power and can take over numerous time-consuming, repetitive tasks to save humans time.

Learning is easy; understanding, on the other hand, is where the limits of the machine appear

If you look at the scene above, whether or not you are an expert in geospatial imagery, one detail is likely to catch your eye: the shape under the orange box is a submarine. Knowing that, and given the nature of the object, your attention will be drawn to it.
But how does the human brain arrive at this behavior?

Let’s try and analyze it.

Even if you have never seen this place, it is pretty straightforward to say that it is a harbor (sea, boats, logistics equipment…): your general knowledge and common sense take over, and you do not even need to think about it.

On the other hand, you know what kind of vessels submarines are, and the fact that this particular one is not aligned with the docks makes you realize that it is on the move: an action is definitely happening there.

Now we know that the human brain barely needs any effort to reach this type of conclusion. But is it the same for machines? The answer is just as straightforward: there is no way a machine could have reached the same conclusions on its own, even though, with the right training, it would have spotted the submarine easily.

It is crucial to understand the difference between learning and understanding in order to ask the machine the right questions and obtain actual results. This is also why we have not yet reached the concept of ‘strong AI’.

The different types of AI

For decades, expert systems have been used to automate a certain number of repetitive tasks, allowing human operators to save considerable time.

Basically, you teach a system how to deduce an output from the input it is fed, according to a set of rules decided by humans.

This approach is very efficient for well-framed problems whose scope humans fully understand, yet it is limited in complexity, since the rules are programmed by humans. This is why no expert system can ever reach the performance of an inductive system, which can compare millions of parameters at once to obtain results.
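To make the idea concrete, here is a minimal sketch of an expert system in Python. The rules, names, and thresholds are purely illustrative, not from the article: the point is that every deduction is hand-written by a person.

```python
# A toy expert system: output is deduced from inputs through
# human-authored rules. All names and thresholds are illustrative.

def classify_vessel(length_m: float, surfaced: bool, near_dock: bool) -> str:
    """Apply hand-written rules to deduce a label from the inputs."""
    if not surfaced:
        return "submarine (submerged)"
    if length_m > 300:
        return "aircraft carrier"
    if length_m > 50 and not near_dock:
        return "vessel underway"
    return "docked vessel"

print(classify_vessel(length_m=70, surfaced=True, near_dock=False))
# -> vessel underway
```

Every branch was decided in advance by its authors, which is exactly the limit described above: the system cannot handle a case nobody thought to write a rule for.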

When the scope is harder to define, or when the field of possibilities is too wide, expert systems give way to ‘weak’ AI, which is really powerful because it can learn the rules by itself, along with the statistical correlations between input and output.

Basically, the machine has learned, from enough examples, to set its own parameters and settings, in its own language, so as to recognize what is expected of it.

A large number of labeled examples is still needed, so the learning remains supervised, but the machine now learns on its own. This is where we stand today.
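The contrast with the expert system can be sketched with a minimal supervised learner: instead of hand-written rules, the machine adjusts its own parameters from labeled examples. The toy data, learning rate, and epoch count below are illustrative assumptions, and the perceptron here is just the simplest possible stand-in for the deep networks used in practice.

```python
# A minimal supervised learner (perceptron): the machine sets its
# own parameters (weights) from labeled examples, with no rules
# written by hand. Data and hyperparameters are illustrative.

def train_perceptron(examples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred          # supervision: compare to the label
            w[0] += lr * err * x1       # adjust parameters from the error
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Labeled examples of a simple pattern (here, logical AND):
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in data])
# -> [0, 0, 0, 1]: the pattern was learned, not programmed
```

Note that the learning stays supervised: every training example carries a label provided by a human, which is exactly the dependence described above.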

Eventually comes ‘strong AI’ (equivalent to human intelligence).

The gap here is much wider: to reach that step, the machine would have to go from learning to understanding, from inductive to deductive behavior, and become able to learn and adapt on its own, without supervision.

This is when the machine reaches common sense. Yann LeCun, one of the founding fathers of modern AI, defines it as follows: “common sense is the ability to fill in the blanks: predicting any part of the past, present or future percepts from whatever information is available.”

Today, the machine cannot achieve this, and we are still far from even having defined the scope of the problem.

Indeed, the state of the art of the most advanced research still focuses on evolving current concepts, whereas a true disruption would be needed to reach strong AI.

Research has definitely accelerated over the past decade, yet it remains impossible to predict when such a revolution might appear: 10, 20, 30 years from now? Nobody knows, but it is unlikely to be in the near future.

Watch Arnaud Guérin, Earthcube’s CEO, explain this concept (in French), at Ecole Normale Supérieure de Paris.
