Are you sure you know what Artificial Intelligence (AI) is?

Telmo Subira Rodriguez · Published in DRILL · Nov 15, 2023 · 6 min read

AI is trending nowadays; it’s hard to ignore that. But the term is not new: AI existed long before the conception of today’s huge Transformer networks and impressive Large Language Models (LLMs).

Long before terms like Deep Learning (DL), Machine Learning Operations (MLOps), and Artificial Neural Networks (ANN) were common in the news and internet posts, Artificial Intelligence had already been born. Born, and growing healthily, years before the launch of famous tools like ChatGPT, Stable Diffusion, SAM, LLaMA, Bard, and AlphaZero.

However, the name Artificial Intelligence has been controversial since its first days. Claiming that machines could be intelligent was a bold statement, especially at a time when the limits of what a computer program could do were far narrower than they are today. The origins of computer AI date back to the 1940s and 1950s, even though we can find many earlier traces of the design or construction of automatic machines throughout human history.

An old computer. AI was already alive back then. Image generated with MidJourney.

But the real question is: what is it? Is Artificial Intelligence an umbrella term for computer programs that can actually think as we humans do? Is it the name for computer programs that can actively learn and adapt themselves? Or something else?

The answer is not easy, but I will try to clarify the idea.

Seeking the definition of Artificial Intelligence

Russell and Norvig wrote a marvelous introduction for their famous book Artificial Intelligence: A Modern Approach, in which they discuss what AI is and how it became what it is. Even the authors don’t offer a complete and absolute definition of AI, but they give us brilliant hints:

The field of artificial intelligence, or AI, is concerned with not just understanding but also building intelligent entities — machines that can compute how to act effectively and safely in a wide variety of novel situations.

Artificial Intelligence: A Modern Approach. Russell and Norvig.

In general terms, what we understand today as AI is the development of computer systems that can perform tasks that typically require human intelligence. These tasks span many fields, such as visual perception, speech recognition, decision-making, and language translation or generation. As Russell and Norvig say:

AI is relevant to any intellectual task; it is truly a universal field.

AI covers any field, any intellectual task. Image generated by MidJourney.

Following this broad definition, the literature usually considers at least two main types of AI:

  • Narrow or Weak AI. The program is designed and developed to perform a particular task, and/or in a specific environment.
  • General or Strong AI. The program is designed and developed to perform any task, learning and adapting from any environment as humans do — better or worse.

As you can guess, current AI models are still stuck in the first group. Even though the development of modern LLMs is one of the most promising paths in the search for a General AI, there isn’t any program yet capable of complete and autonomous reasoning and adaptation. Computer programs, be they AI or not, still run with many internal and external limitations. However, AI research is currently focused on developing more general and adaptive models, capable of performing multiple tasks and learning from different environments, building bridges between the two types of AI.

But how can we differentiate an AI program from any other computer program?

Think about this: any Python script that sums up the prices of your grocery list and adds the taxes to the result could be considered “a computer program that performs a task that typically requires human intelligence”.

In the end, you would use your own human intelligence and your math skills to sum up those prices by yourself. It is a human task, requiring human intelligence. So why shouldn’t we consider that Python script AI?
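For reference, such a script could be as short as the sketch below. The prices and the flat 10% tax rate are invented values, just for illustration:

```python
# A minimal sketch of the grocery script discussed above.
# The item prices and the flat tax rate are made-up values.

GROCERY_PRICES = [2.50, 1.20, 3.99, 0.89]  # price of each item
TAX_RATE = 0.10                            # assumed flat tax rate

subtotal = sum(GROCERY_PRICES)
total = subtotal * (1 + TAX_RATE)

print(f"Subtotal: {subtotal:.2f}")
print(f"Total with tax: {total:.2f}")
```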

A script that sums up the prices of your grocery list. Can we consider it an AI? Image generated by MidJourney.

Many experts will tell you that AI systems are also designed to learn from experience, adapt to changing inputs, and perform tasks without being explicitly programmed for each specific task. And that is true for many modern AI systems (remember: we are searching for more general AI systems), but not for all of them. There is AI beyond Machine Learning: Bayesian Networks, Fuzzy Logic programs, and Inductive Logic Programming systems are typical examples. Even though we are accustomed to hearing about very deep and complex neural networks, plain deterministic algorithms can be part of an AI system too. Remember: the most Narrow AI is still AI! A tiny Fuzzy Logic sketch is shown below to make this concrete.
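Here is a toy Fuzzy Logic controller, written as a plain deterministic program with no learning involved. The membership functions, the rules, and the fan-speed values are my own invented example, not taken from any particular system:

```python
# A toy fuzzy-logic controller: purely deterministic, no learning,
# yet historically considered part of the AI toolbox.

def cold(temp_c):
    """Degree (0..1) to which a temperature feels 'cold'."""
    return max(0.0, min(1.0, (18.0 - temp_c) / 8.0))

def comfortable(temp_c):
    """Degree (0..1) to which a temperature feels 'comfortable'."""
    return max(0.0, 1.0 - abs(temp_c - 21.0) / 6.0)

def hot(temp_c):
    """Degree (0..1) to which a temperature feels 'hot'."""
    return max(0.0, min(1.0, (temp_c - 24.0) / 8.0))

def fan_speed(temp_c):
    """Rules: cold -> 0 %, comfortable -> 40 %, hot -> 100 % fan speed."""
    weights = (cold(temp_c), comfortable(temp_c), hot(temp_c))
    outputs = (0.0, 40.0, 100.0)
    total = sum(weights)
    if total == 0:
        return 40.0  # outside all fuzzy sets: fall back to the middle setting
    # Weighted-average defuzzification of the rule outputs.
    return sum(w * o for w, o in zip(weights, outputs)) / total

for t in (10, 22, 25, 30):
    print(f"{t} °C -> fan at {fan_speed(t):.0f}%")
```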

So, what makes a grocery Python script and the object detection software from Tesla different? Is it just the complexity?

No, it is not the complexity. It is the purpose. AI performs tasks that simulate human behavior or reasoning. Sometimes the program simulates the human mental process, and sometimes just the outcome, because we know little about our own brains. In the end, an AI program is a mathematical model for solving a human task, one that simplifies the whole mental process. It is not an exact implementation of the same steps we perform in our heads, but a workaround!

Bringing back the example of the Python script: summing up a list of prices is an already well-determined mathematical operation that we can describe unequivocally and repeat exactly. On the other hand, recognizing and spatially locating vehicles in our field of vision is a task that we humans perform with knowledge that cannot be expressed mathematically, and it therefore requires a model to simulate the results.

AI programs model human behavior and reasoning, or at least their outcomes. Image generated with MidJourney.

The first cannot be considered AI, since it does not model the intelligent agent that performs the task. It just solves a problem, just performs a task, as any other mechanism in human history has done before. There is no mental process being modeled, no reasoning, no thinking or feeling about the problem.

Vehicle recognition, on the other hand, looks much more like AI. It simulates the human reasoning that we are not able to completely explain and gives us similar results, using a mathematical model. Even if we tried, with our current knowledge we could not describe and reproduce the exact mental process. If you believe the trick still relies on complexity, think of simpler tasks, like text recognition: identifying black characters over white backgrounds. AI does that, simulating our own reasoning, even though we humans don’t use “letter recognition equations” when we read.
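To see what “simulating the outcome” can look like in its simplest form, here is a toy letter-recognition sketch of my own (nothing like production OCR): it classifies a tiny black-and-white grid by comparing it against stored templates, with no attempt to reproduce how we actually read:

```python
# Toy "letter recognition": a mathematical model (nearest-template matching
# on 5x5 black-and-white grids) that approximates the outcome of reading,
# not the mental process. Templates and the test sample are invented.

TEMPLATES = {
    "T": ["#####",
          "..#..",
          "..#..",
          "..#..",
          "..#.."],
    "L": ["#....",
          "#....",
          "#....",
          "#....",
          "#####"],
}

def to_bits(grid):
    """Flatten a 5x5 character grid into a list of 0/1 pixels."""
    return [1 if ch == "#" else 0 for row in grid for ch in row]

def classify(sample):
    """Return the template letter whose pixels differ the least."""
    sample_bits = to_bits(sample)

    def distance(letter):
        template_bits = to_bits(TEMPLATES[letter])
        return sum(a != b for a, b in zip(sample_bits, template_bits))

    return min(TEMPLATES, key=distance)

# A slightly noisy "T" (one pixel missing in the top row).
noisy_t = ["####.",
           "..#..",
           "..#..",
           "..#..",
           "..#.."]
print(classify(noisy_t))  # -> "T"
```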

Conclusion

Even the simplest algorithm can be an AI if it models the agent performing an intelligent task and simplifies the complexity of our human intelligence. We want AI programs to decide for us, to guide our decisions, to challenge us, or to automate our tasks. But it’s easy to understand that, when we are modeling something as extremely complex as human thinking, we will need more and more complex models in the future to improve our results.

AI models will be more and more complex in the future. Image generated by MidJourney.

Do you want to read more about why the term intelligent makes everything so complicated when we talk about computer programs? I hope you enjoy my previous article on this topic.
