If You Are Still Calling AI Artificial Intelligence, You Are Wrong

Preslav Rachev · Published in The Startup · 5 min read · Aug 15, 2020


Originally published on my blog.

OK, let’s be fair. The second half of AI does stand for Intelligence, but not the kind you might be thinking about. The I in Intelligence refers to our own human way of reasoning, because the A stands for Augmented. That’s right. If you have been calling it Artificial Intelligence all this time, I propose you pause for a second and consider the term Augmented Intelligence instead.

Much of the physical Universe is discrete. Objects are made up of a finite number of atoms. Text is a sequence of words drawn from a finite set. Images consist of a finite number of dots, each representing one of a finite set of colors. Do you know who is good at working with the discrete world? That’s right: computers.

I see people marveling at GPT-3 the same way they marveled at the myriad of photo style transfer experiments a few years ago. While those examples are by all means outstanding achievements, none would pass a true test of intelligence.

Specifically, the case of GPT-3 is a well-known application of the Infinite Monkey Theorem (IMT). The IMT stipulates that a monkey hitting random keys on a typewriter for an infinite (or merely very long) amount of time will eventually produce coherent human text. The works of Shakespeare, for instance. While improbable within a human lifespan, the IMT makes sense from a theoretical standpoint. With roughly 30 keys to choose from, producing a 500-character piece of readable text is only a matter of trying out, at worst, all 30⁵⁰⁰, or …

roughly 3.6 × 10⁷³⁸ (a 739-digit number)

possible keystroke sequences, and you will eventually get it.
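
For the curious, here is a quick back-of-the-envelope check of that figure in Python, using the same rough assumptions as above (a 30-key alphabet and a 500-character target):

```python
import math

KEYS = 30      # a rough keyboard: 26 letters, space, and a few punctuation marks
LENGTH = 500   # length of the target piece of text

# Every 500-character sequence is one of KEYS ** LENGTH possibilities.
combinations = KEYS ** LENGTH

print(len(str(combinations)))                  # 739 digits
print(f"~10^{LENGTH * math.log10(KEYS):.1f}")  # ~10^738.6
```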

Since we have fast machines, we might as well use one to “hit the keys” faster. And while it hits the keys, we can “tell” the machine to observe how frequently certain characters appear next to each other. Then, how often certain character pairs co-occur in the same word. Then, how often certain words appear in the same sentence. Suddenly, what was once a tedious brute-force, hit-and-miss process turns into a tamed version of the same. The machine still relies on a random factor, but a much smaller one, thanks to all the statistics collected in the first phase. One might think of it as shaving most of the zeroes off the number above, leaving a far more down-to-earth number for a machine to crunch.
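
As a toy illustration of that taming, here is a minimal word-level sketch in Python. To be clear, this is not how GPT-3 is actually built (it is a vastly larger neural network), but the spirit of sampling from observed co-occurrence frequencies instead of rolling fair dice over the whole keyboard is the same:

```python
import random
from collections import defaultdict

# Phase one: collect co-occurrence statistics from some corpus.
corpus = "to be or not to be that is the question".split()

followers = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word].append(next_word)

# Phase two: "tamed" randomness. The machine still rolls dice, but only
# among words it has actually seen following the previous one.
def babble(seed, length=8):
    words = [seed]
    for _ in range(length):
        candidates = followers.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(babble("to"))  # e.g. "to be that is the question"
```

Seed it with “to” and it happily produces coherent-looking fragments, with zero understanding behind them.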

This is, in essence, how GPT-3, or for that matter all of what you call AI, works. By all means a very complex process, but one void of magic or of any signs of emerging thought. It is a strictly finite and discrete problem, and the machines of today seem to be doing a good job of solving it.

AI Is a Mirror and a Prism of Human Subjectivity

Here comes the tricky bit. If a machine can crack Shakespeare, it could probably generate other pieces of text, many of them equally groundbreaking. Ones that the human mind has not produced so far. Right?

Yes and no. A machine may babble out coherent text (or any other form of content) ad infinitum. Yet it has no way of understanding whether the output is good or bad. Five machines will look at a certain piece of content and come up with the same answer as to whether it matches what they were initially fed. In contrast, five people will look at the same piece and come up with five different answers. Ask those five people again over time and chances are, their answers will change as well.

To be human is to be different, to be subjective. The machine has no concept of subjectivity or personal bias. Those can only come from the true reasoning of the human mind. This is the reason the machine learning algorithms in production today are biased towards one aspect of society or another: a human fed them biased inputs carrying that human’s subjectivity. Feed them someone else’s subjective inputs, and they will start acting accordingly.
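
To make that concrete, here is a deliberately crude sketch. The “model” and the labels are made up for illustration; the point is only that the same algorithm, fed two differently skewed sets of inputs, behaves differently:

```python
from collections import Counter

# A deliberately crude "model": it predicts whatever label was most common
# in its training data. Everything it "knows" comes from its inputs.
def fit_majority(labels):
    return Counter(labels).most_common(1)[0][0]

# Two humans curate two differently skewed training sets (made-up labels).
annotator_a = ["approve", "approve", "approve", "reject"]
annotator_b = ["reject", "reject", "reject", "approve"]

print(fit_majority(annotator_a))  # approve
print(fit_majority(annotator_b))  # reject
```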

Towards Augmented Intelligence

The fact that I have tried to play down the impact AI is having on society shouldn’t stop us from trying to improve it. We should do this, however, with the right idea of what AI can do: augment human intelligence, not replace us altogether one day.

Augmented Reality (AR) is not trying to replace reality, but to make it more interactive and accessible. Similarly, AI won’t replace the human brain, but it can help it focus on the bigger picture. Take the example of GPT-3 again. We can let AI produce text, but someone has to give the AI the initial spark of what to produce. Afterwards, that same person has to comprehend the produced content and apply a subjective judgment as to whether it is good or bad. That same person may tweak the output and feed it back to the algorithm, arriving at a version of the original output that is even more to their liking. If another human goes through the same process, the result will be different.
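
A minimal sketch of that loop might look like the following. The generate function is a hypothetical placeholder rather than any real API; the initial spark, the judgment, and the tweaking all stay with the human:

```python
def generate(prompt):
    # Placeholder: swap in a call to a real text-generation model here.
    return f"[generated text for prompt: {prompt!r}]"

prompt = input("Give the AI its initial spark: ")
while True:
    draft = generate(prompt)
    print(draft)
    # The subjective judgment stays with the human: keep it, or tweak and retry.
    if input("Good enough? (y/n) ").strip().lower() == "y":
        break
    prompt = input("Tweak the prompt: ")
```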

The Curse of AI

This is the sweet spot of AI: using it to express human uniqueness in new ways. It is also its biggest threat to humanity. Not because intelligence may suddenly emerge and annihilate us all. Rather, because AI is a mirror that one may not always know one is looking at. Similar to an animal looking at a physical mirror, one may look at the outputs of an AI algorithm and think they are real, simply because the mirror was shaped by that individual’s implicit inputs.

By our mere interaction with the world, we leave implicit traces. A smart individual can use those to create filter bubbles we may never get out of. As I am writing this, millions of individuals are receiving their daily portion of algorithmically cooked-up filter bubble. Who knows, maybe this isn’t a text written by a human, but only commissioned by one, targeting a group of sceptics and conspiracy theorists. Who knows. The point remains valid, though: AI won’t do us harm on its own. It’s the humans behind it whose intentions might. Thus, combating the dark side of AI isn’t about fighting an invisible enemy, but a real one, flesh and blood. One with fears and biases.

AI is an augmentation of our strengths, but also of our biases and fears. It can be our biggest friend or our biggest foe. It’s up to us to get to know it better and use it for the right cause. First and foremost, though, we need to agree on what to call it.

I am a genuinely curious individual on a mission to help digital creators and startups realize their vision. Follow my journey: https://preslav.me