Don't Believe The Hype: Understanding A.I. Trends in Early 2023 (A Non-Technical Guide)

M. Hammad Hassan
12 min read · Jul 13, 2023

--

Sure, AI seems scary and tempting, but are we measuring its power with the right scale? Who sets that scale for us? Who decides how we measure it, and what do we actually learn from those measurements?

These are just a few of the questions that will shed light on how the A.I. of today (in 2023) should actually be perceived.

This article aims to clarify the concept of Artificial Intelligence for non-technical readers and to give an honest, in-depth picture of what today's AI is capable of. I will also briefly explain what drives today's AI and walk you through some important concepts. Don't worry, I will keep my language as basic as I can.

And YES, this article IS about ChatGPT and the army of AI tools backed by different organizations.

So without further ado, let’s dive into it …

Photo by Andy Kelly on Unsplash

1- What Is Artificial Intelligence?

One thing to know before you assume anything: the A.I. we have today is not actually able to THINK the way we assume it does. What is currently storming our markets is a clever piece of mathematical programming that simulates the thinking process, which is far from the real thinking that we humans do.

From a functional point of view, Artificial Intelligence comes in two types: Artificial Narrow Intelligence (ANI) and Artificial General Intelligence (AGI).

📌 Your head may already be spinning with all this tech jargon, but remember that I have dedicated this article to non-techy people. That means my job here is NOT to trouble you with technical terms but to show you the way through the age of automation in the easiest way possible.

Coming back to ANI and AGI … here is what each of them is and does, in a nutshell:

Artificial Narrow Intelligence (ANI) 👉 This A.I. is only capable of simulating very specific, or narrow, functions. A good example is an A.I. that can only play chess. It may be exceptionally good at playing chess, but that is all it can do. In other words, it can do nothing else. No need to fear this one.

Artificial General Intelligence (AGI) 👉 This is the A.I. you should be worried about, as it would be capable of completely simulating a human brain and its thinking capability. That means it could generally do everything a human can do … but faster, better and more efficiently. This includes coding, driving (with a built-in license), conversing in human language, showing empathy, adapting to new environments, exhibiting common sense and more. That is genuinely fearsome and threatening if you wish to keep your day job. Thankfully, it doesn't exist … not yet, anyway. Also note that AGI is not good at just one thing or programmed for one specific task; it is generally good at everything a human can do.

What you see taking over the market is ANI, which is only capable of doing specific tasks. ChatGPT and DALL-E, which you are probably afraid of, are also ANI.

One Common Misconception …

You should not mix up the idea of robots with AI. They are two different things: robots do not necessarily need an AI brain to work.

2- How Is It So Much Better Than Us? (Where Does It Get The Talent?)

It's a smart machine with lots of calculations going on behind its metal head, so it's bound to be better than us, who whip out a calculator seconds after hearing a math problem … DUH! … Not exactly, though.

As I said, these are specialized mathematical algorithms, but on their own they are simply useless.

📌 I use the words “algorithms” and “models” interchangeably, as most people do.

What they need to work and be a smart ass is data, and lots of it … LOTS OF IT. The only thing that drives AI to do what it does is DATA.

I'm not talking about terabytes of data here; I'm talking about more than that. Think about all the data you could collect from Facebook or Google users. Each user generates tons of data, and there are billions of users. That should give you a clear picture of the kind of data I'm talking about. In the industry, this huge mountain of data is called Big Data (not a very fancy name).

Now, if you want your mathematical model to learn to tell a picture of a dog from a picture of a cat, you will need to train it with tons of data. In this case, that means pictures of dogs and cats, each with an explicit label so the model knows whether to call a given picture a dog or a cat. Once training is complete, you let people around the world use your model: they show it a picture of a dog or a cat and see whether it identifies the picture correctly. Many factors govern how accurate your model is, but one big factor is the amount of data you train it on. The more training data, the better (although it's a lot more complicated than that).

📌 Note that the above method for training a mathematical model is just one of several ways we do it.
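To make the idea of “training with labeled data” a little more concrete, here is a toy sketch in Python using the scikit-learn library. It is purely illustrative: random numbers stand in for real photos so the example runs anywhere, and an actual dog-vs-cat classifier would be far more elaborate.

```python
# A toy sketch of supervised training: show the model examples plus labels,
# then ask it about a new example. Random numbers stand in for "pictures".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# 200 fake "pictures", each boiled down to 64 numbers, with one label each:
# 0 means "cat", 1 means "dog".
pictures = rng.normal(size=(200, 64))
labels = rng.integers(0, 2, size=200)

model = LogisticRegression(max_iter=1000)
model.fit(pictures, labels)             # training: pictures + labels go in

new_picture = rng.normal(size=(1, 64))  # someone shows the model a new image
print("dog" if model.predict(new_picture)[0] == 1 else "cat")
```

The only “magic” is in how the math adjusts itself to fit the labeled examples; everything else is just feeding in data.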

3- Generative Models/AI

With that basic knowledge of Artificial Intelligence, you are now equipped to understand what a generative model is.

A generative model has only one purpose: to generate stuff in line with the data it has been trained on. GitHub Copilot, for example, has been trained on people's code found on GitHub, so it specializes in the coding domain only. DALL-E and Stable Diffusion (image-generating AIs) have been trained on digital art and digital versions of original artworks, so they specialize in generating artwork.

Of course, the stuff they generate somewhat resembles the stuff humans have made. For example, an artist made a digital artwork of a horse running over some waves. Something like this:

Source: https://i.imgur.com/AVIe1Zm.jpg

Notice that both types of models I mentioned above require some sort of prompt, written in English, in order to generate a relevant image or code snippet. The AI reads that prompt, breaks it into keywords and tries to extract meaning from it. If the model is good enough at understanding you, and you are good enough at writing a prompt with all the relevant keywords, then you are likely to get what you wanted.

While the picture above was created by a real artist, I prompted an AI to generate an image that somewhat resembles it, and this is what I got.

Image generated by Stable Diffusion

📌 In case you are wondering what I wrote as a prompt to generate this image, here it is:

“Two white camargue horses running in surf on beach provence france”
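For the curious, here is roughly what feeding a prompt like that to an image model looks like in code. This is a minimal sketch assuming the open-source Hugging Face diffusers library, publicly available Stable Diffusion weights and a machine with a GPU; it is not necessarily how the image above was produced.

```python
# A minimal sketch of prompting an image-generation model.
# Model name and settings are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "Two white camargue horses running in surf on beach provence france"
image = pipe(prompt).images[0]   # the model turns the text into pixels
image.save("horses.png")
```

That is the whole interaction: plain text goes in, an image comes out.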

You could make a generative AI for anything you like. For example, you could make an AI that generates sound or music for you. This is only possible provided you have enough “sound” data to train your model on (as described in section 2).

A.I. learns from us humans and uses its power of superfast calculation to do what we do, but faster and more accurately. Since it learns only from the vast but “limited” data that we provide, we can safely assume that if we don't innovate, it will have no new innovation to replicate; in other words, it cannot innovate by itself.

📌 You might have heard about “Computer Vision” out there in the wild. The image examples above come from Computer Vision models. Computer Vision is a branch of AI that enables machines to understand and manipulate visual information such as images and videos (sequences of images). It also covers tasks such as outpainting, inpainting, and generating variations of existing images.

4- How Was AI Used Before ChatGPT, And How Does ChatGPT Compare?

Although ChatGPT works on a “general” level (meaning it can handle not only code but also exercise routines, breakfast recipes, essays and so on), AI before it was used for very specific tasks. Search engines use it to decide the best list of results to show users. Voice assistants use it to handle basic tasks like setting reminders or answering simple questions, by turning voice data into something a machine can work with. Google's maps and navigation services use it to pick the best route to your destination. Facial recognition AI powers attendance systems, where a camera looks at your face to decide whether you are authorized to enter. Autocorrect guesses the next word in your sentence. Some supermarkets have even introduced smart trolleys that bill everything you put in them and charge you directly, so you never have to wait at a checkout again. There are many, many examples like this, ranging from everyday applications to highly specialized ones like air traffic control.

To be honest, AI was already in the market, just not as visibly as it is now. The difference is that the public face of AI today is systems that can generate stuff: images, text, audio or video.

ChatGPT offers several features, such as text generation, image generation and sound generation (and it could gain more in the future), because several different models were combined to make it possible. Those models work together much like the way our different sensory organs combine to let a human being function in different ways.

5- Is ChatGPT an Innovation?

There has been, and still is, an ongoing debate about whether ChatGPT is an innovation or not, since OpenAI thought it best to keep its trade secrets … well … secret (though it revealed some of them afterwards). Only experts in this field are really in a position to judge whether it is an innovation.

Meta's Chief AI Scientist, Yann LeCun, says:

“In terms of underlying techniques, ChatGPT is not particularly innovative,”

He further added:

“It’s nothing revolutionary, although that’s the way it’s perceived in the public, … It’s just that, you know, it’s well put together, it’s nicely done.”

A lot of people will doubt his words because we haven't seen anything quite like ChatGPT before. But since they come from a highly renowned scientist, who certainly has a deep understanding of how machine learning works, let's give his statement some thought.

Well, he is correct: ChatGPT builds on techniques put forward by hard-working scientists from around the globe and uses solid software engineering to integrate them into a fully working prototype of a future chatbot. And when he says it is being hyped up, it's because it is. The models at work behind ChatGPT already existed before its time. It is more a clever synthesis of existing knowledge than a breakthrough.

To back up Yann LeCun's point, there are several credible sources that reveal OpenAI's secret sauce. Jan Leike, a scientist at OpenAI who worked on ChatGPT, told Will Douglas Heaven, senior editor for AI at MIT Technology Review, in an interview:

In one sense you can understand ChatGPT as a version of an AI system that we’ve had for a while. It’s not a fundamentally more capable model than what we had previously. The same basic models had been available on the API for almost a year before ChatGPT came out. In another sense, we made it more aligned with what humans want to do with it. It talks to you in dialogue, it’s easily accessible in a chat interface, it tries to be helpful. That’s amazing progress, and I think that’s what people are realizing.

On top of that, the media uproar spread this newly revealed sauce far and wide, hyping up the research and giving a lot of people a false sense of the model's capabilities.

6- How Authentic Are Those “Sparks of Intelligence”?

Many people like to say that the latest AI technologies are exhibiting sparks of “intelligence”, which the public compares with human intelligence. This is a myth (as we established earlier), but I would still like to state a few facts.

Yes, today's AI systems are intelligent, but that “intelligence” has nothing to do with the intelligence naturally present in humans. It is a different kind of intelligence altogether: the narrow kind we know as ANI (described in section 1).

Over the years we have devised several measures of human intelligence, such as IQ, EQ, SQ and AQ, yet none of them fully captures natural human intelligence. Even so, these measures cannot simply be applied to an AI system. AI systems need measures tailored to their specific narrow intelligence in order to judge how well or badly they are performing.

For example, the DALL-E 2 model would need some sort of measurement that looks at the input prompt and the resulting generated image, and compares the two to see how much of the prompt's meaning is reflected in the image. But the bar has to be set modestly, since DALL-E 2 does not innovate (as established in section 3). That is one way of thinking about it.
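One concrete way researchers already approximate this kind of prompt-to-image agreement is with CLIP, a model that scores how well a caption matches an image. The sketch below assumes the Hugging Face transformers library and an image file saved on disk; it illustrates the idea only and is not an official benchmark for DALL-E 2.

```python
# A minimal sketch of scoring prompt-image agreement with CLIP.
# The image path and prompt are illustrative assumptions.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("horses.png")                       # the generated image
prompt = "Two white camargue horses running in surf"   # what was asked for

inputs = processor(text=[prompt], images=image,
                   return_tensors="pt", padding=True)
outputs = model(**inputs)

# Higher means the image and the prompt "agree" more, in CLIP's judgement.
score = outputs.logits_per_image[0][0].item()
print(f"prompt-image similarity score: {score:.2f}")
```

A score like this is crude, but it captures the spirit of what a narrow measurement for a narrow intelligence looks like.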

📌 I’ll write more about the measurement of intelligence in the near future since this topic has a lot to consider. I’ll make sure to include the link right here.

7- Is OpenAI The Only One With A Super Chatbot?

Recently, lots of alternatives to chatbots like ChatGPT have sprung up. I won't list every one of them here, but they are easy to find if you are curious.

Most of the new AI chatbots you see out there today are good at performing like ChatGPT because … well … they actually use OpenAI's GPT models behind the scenes. Examples include Bing Chat, Character.AI and JasperChat. Even Google released its own chatbot, Bard.

But thankfully we also have fully open-source AI chatbots that are not backed by corporations serving their own business models. HuggingChat and GPT4All, for example, compete with ChatGPT using GPT-style models of their own, and the data behind them is not handled by big tech companies.

On the other hand, some people have even tried creating a general chatbot from scratch using entirely different models. ChatRWKV is a good example: instead of a GPT-style model, it is built 100% on a Recurrent Neural Network (RNN).

📌 I know “RNN” is technical jargon and this article is intended for non-technical people. So just know this: an RNN is simply another kind of mathematical model that works differently behind the scenes.
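If you are mildly curious what “recurrent” means, here is a toy sketch of the core idea: the model reads its input one piece at a time and keeps updating a small memory as it goes. The weights and word vectors here are random stand-ins, so this is an illustration of the concept only, not how ChatRWKV is actually implemented.

```python
# A toy sketch of the recurrent idea: process tokens one at a time while
# carrying a small "memory" vector along. Everything here is untrained.
import numpy as np

rng = np.random.default_rng(0)

memory = np.zeros(8)                 # the running "memory"
W_input = rng.normal(size=(8, 8))    # random, untrained weights
W_memory = rng.normal(size=(8, 8))

sentence = rng.normal(size=(5, 8))   # 5 stand-in "word" vectors
for word in sentence:
    memory = np.tanh(W_input @ word + W_memory @ memory)

print(memory)   # a compressed summary of everything "read" so far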

8- Myth Of Prompt Engineering

Prompt engineering is exactly what it sounds like, minus the engineering part. As discussed before, a prompt is basically a combination of words in a human language (mostly English for now, but that can change) which you feed to a chatbot like ChatGPT. The chatbot parses the prompt and tries to understand the meaning behind it, then responds within the context of what you just said. Now think for a second: where is the engineering part?

Some people might say: well, the chatbots are sensitive to the keywords fed into them, so there might be tricks for identifying the specific keywords that get a satisfactory response. Maybe that's what people refer to as “engineering”. But guess what? We already had a profession that does exactly the same thing, and a bit more. It's called Search Engine Optimization (SEO), where a person finds the right keywords to put into your website so that it has a better chance of landing in the top 10 or 20 search results. And no, SEO has nothing to do with engineering either. So the job posts that refer to prompt engineering are simply bogus.

To add to that, anyone can easily become a prompt engineer without any training, because you will definitely get the hang of prompting once you have chatted with a chatbot a few times. It's just like texting. You do need reasonably good English for prompting, but that is about it.
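To show just how un-mysterious prompting is, here is a minimal sketch of sending a prompt to a chat model programmatically, using OpenAI's Python client as it looked in 2023. The model name, prompt and API key are illustrative placeholders, not a recommendation.

```python
# A minimal sketch of what "prompting" boils down to: send plain text to a
# chat model's API and print the reply. Placeholders only, no real key.
import openai

openai.api_key = "YOUR_API_KEY"   # placeholder, not a real key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user",
         "content": "Explain narrow AI to a ten-year-old in two sentences."},
    ],
)

print(response.choices[0].message["content"])
```

There is no secret machinery on our side of the fence: the whole craft is choosing the words you send.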

Thank you for reading till the end!

That would be it for this article. Make sure to give it a 👏 if you found it helpful. Or feel free to let me know📣 your thoughts on this.

--


M. Hammad Hassan

Hey there! I'm a Data Scientist and a Full Stack Developer.