Understanding AI: Intelligence in Practice

keira
5 min read · Jan 4, 2023


How smart can an AI system really be? Recently we’ve seen AI technology surpass its previous limits (and our expectations) again and again, from AI-generated art to AI-directed movies…it seems there’s no limit to what AI can do in the future. And that might be true.

When it comes to the practical use of AI, though, the notion of intelligence is a different story. What the word “intelligence” entails in practice, and how much “intelligence” we can or should expect, are worth discussing even though the topic is both extremely broad and deeply debatable. And yet, difficult as it is to discuss, these remain essential questions to consider when business or modeling decisions are being made.

What is intelligence anyway, from the point of view of practical AI systems? And why does it matter in AI practice? To answer the why, it is important that we explore the what first, by comparing the characteristics of “smarter” solutions with less intelligent ones. Only if we understand what intelligence represents in such systems can we recognize and use it to serve our practical purposes.

First, intelligence is about automatically applying the right approach to solve the right problem.

In this way, solutions are formed by the system’s analysis of the problem, rather than the system simply and rigidly following a rote set of instructions.

Take the classic classification problem of cat vs. dog. To classify, we can either (1) look at pictures of cats and dogs, or (2) listen to how differently they sound. A system that can approach the problem in both ways, and judge when to use which method, is without a doubt much smarter. Such a multifaceted path requires the system to access different forms of data and to seek the most appropriate classification avenue given the context.
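A minimal sketch of this routing idea, with entirely hypothetical placeholder "models" (a real system would call trained image and audio classifiers; the names and feature keys here are invented for illustration):

```python
# Illustrative sketch, not a production model: a router that picks a
# classification method based on which modalities the sample provides.

def classify_by_image(image):
    # Placeholder for a trained image classifier.
    return "cat" if image.get("whiskers_prominent") else "dog"

def classify_by_sound(audio):
    # Placeholder for a trained audio classifier.
    return "cat" if audio.get("pitch_hz", 0) > 600 else "dog"

def classify(sample):
    """Choose the most appropriate method for the data we actually have."""
    if "audio" in sample and sample["audio"].get("clear", False):
        return classify_by_sound(sample["audio"])  # distinctive sound: use it
    if "image" in sample:
        return classify_by_image(sample["image"])  # otherwise fall back to the image
    raise ValueError("no usable modality in sample")

print(classify({"audio": {"clear": True, "pitch_hz": 800}}))              # cat
print(classify({"audio": {"clear": False},
                "image": {"whiskers_prominent": False}}))                 # dog
```

The intelligence here lives in `classify`, the part that chooses an avenue given the context, rather than in either individual model.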

As simple as this sounds, in practical terms it is very difficult to achieve, since most of the time the availability and capture of even one form of data is limited, let alone a variety. However, if you’re blessed with different forms of data (textual, image, audio, video, etc.) or different sources (operational, marketing channels, etc.), you are off to a great start: you already have the prerequisites for constructing a smarter solution.

It is important for AI practitioners to remain cognizant of this key aspect of intelligence, and to partner with each other so that we can detach solutions from problems. As needs arise, we won’t limit ourselves to optimizing or improving an existing solution; rather, we will be able to search outside its parameters and confines to solve the problem more effectively.

Another characteristic of an intelligent system is that it can reach a comparable level of success with the fewest inputs and resources.

At Ekohe, this is the use case most familiar to us: we often have very limited access to data, especially data that is well labeled and of good quality. Models and frameworks that work effectively with limited data are the most exciting to us, not only because they require less, but more importantly because their generalization is strong enough that we can expect fewer failures on future instances. Such technologies should be considered intelligent and incorporated where needed.

It is a welcome surprise that even with pre-existing data limitations, we can construct not only a working solution but often a more robust one. In other words, “asking for more” is not always the smart way to go.

Intelligence and creativity can be born with limitations, and these limitations should be welcomed from time to time.
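A tiny numerical sketch of why “asking for more” capacity can backfire (the data here is made up for illustration): with only six training points, a simple line generalizes far better than a flexible degree-5 polynomial that fits those same points perfectly.

```python
import numpy as np

# Six training points from an underlying linear trend y = 2x,
# with small fixed perturbations standing in for noise.
x_train = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
noise   = np.array([0.1, -0.1, 0.05, -0.05, 0.1, -0.1])
y_train = 2.0 * x_train + noise

simple_fit   = np.polyfit(x_train, y_train, deg=1)  # a line: 2 parameters
flexible_fit = np.polyfit(x_train, y_train, deg=5)  # interpolates all 6 points exactly

# Evaluate on an unseen point outside the training range.
x_new, y_true = 3.0, 6.0
err_simple   = abs(np.polyval(simple_fit, x_new) - y_true)
err_flexible = abs(np.polyval(flexible_fit, x_new) - y_true)

print(f"simple model error:   {err_simple:.2f}")
print(f"flexible model error: {err_flexible:.2f}")
# The flexible model chases the noise and fails badly off its training data.
```

The flexible model “asks for more” parameters than the data can support; the simple one, born with limitations, is the more robust of the two.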

A final and very important point about intelligence, and one that often trips us up: intelligence has no direct correlation with the complexity of the system.

Intelligence is neither simplicity nor complexity.

Herbert Simon developed an enlightening concept, appropriately called “Simon’s Ant”: when you observe an ant on a beach, you might be puzzled by the complexity of its path. You’d struggle to figure out why the ant changes direction at every step. Perhaps you’d conclude that, to form such a complex route, the ant must be very smart. But if you look closer, you’ll find tiny rocks blocking the ant’s way; the ant is only trying to avoid them. The complex curve of the ant’s path actually results from a very simple behavior in a very complex environment.
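A toy grid-world simulation of Simon’s point (the rock positions are arbitrary, invented for illustration): the ant’s “program” is one simple rule, step toward the goal and sidestep if blocked, yet the path it traces through a rocky environment is full of turns.

```python
# Simon's Ant as a one-rule agent in a complex environment.
rocks = {(2, 0), (3, 2), (4, 1), (5, 3), (6, 0), (7, 2), (8, 4)}
goal = (10, 0)

def step(pos):
    """Simple rule: move toward the goal; if a rock blocks the way, sidestep."""
    x, y = pos
    if x < goal[0]:
        forward = (x + 1, y)
    elif y > goal[1]:
        forward = (x, y - 1)
    else:
        forward = (x, y + 1)
    if forward not in rocks:
        return forward
    # Blocked: sidestep up if possible, otherwise down.
    return (x, y + 1) if (x, y + 1) not in rocks else (x, y - 1)

pos, path = (0, 0), [(0, 0)]
while pos != goal and len(path) < 100:
    pos = step(pos)
    path.append(pos)

print(f"reached {pos} after {len(path) - 1} steps "
      f"(a rock-free beach would take 10)")
```

The zig-zag in `path` comes entirely from the rocks; the agent itself never does anything cleverer than its one-line rule.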

The complexity of any system is really the combined complexity of its environment and its program (solution). Even in very complex environments, simple solutions can achieve great results, and so they are often sought out. But when we look at a system in its totality, we can’t properly assess how intelligent (or not) it actually is unless we account for both.

With all these factors in mind, let’s revisit our questions. From the point of view of AI systems, intelligence often implies the quality of a solution. Several models or solutions might solve your problem in its current state and setting, and sometimes many of them promise the same level of success, so you may not know which one(s) to choose. Or there may be no obvious way to gauge that success before carrying out an initial plan, and you may not be sure where to look first (or second) to recommend or justify a proposed solution’s effectiveness.

By keying on the factors identified in this article, and understanding intelligence against them, we arrive at a set of starter questions, a much-needed set to point us in the right direction toward our goals.

  • What is the baseline approach?
  • Are there multiple approaches to the business problem?
  • Is the system designed with the notion of accommodating different methods, even if some might only be ready in the future?
  • How many inputs and resources does each method need in order to succeed?
  • How much can the system account for future data? Is robustness in the picture when evaluating the system?
  • What might be the difficulties in the problem setting? What technical highlights are leveraged and how are they helping solve the problem?
  • Are we oversimplifying the situation or overcomplicating the situation?

AI technologies amaze us all the time. We are amazed by their growing capabilities, and sometimes it feels like they could become smart enough to match human intelligence and, if you believe the naysayers, even destroy us one day.

I still remember one class from my university years. After showing us a very smart chatbot and explaining what happened behind the scenes, the professor said: “Now this doesn’t seem so smart at all, does it?” And we all laughed, because we recognized the truth behind his words: when we understand something, its intelligence seems to vanish. So I guess, to better utilize the intelligence of AI systems, we’ll need to let that intelligence vanish first, by trying to understand it.
