VJ Manickam
8 min read · Jan 22, 2020

Weak, Strong AI — Do these Terms Matter?

In today’s post, I am going to be discussing Weak, Strong, Narrow, Broad, and General AI.

Artificial intelligence spans a broad spectrum of ability, and right now many people try to classify AI as either weak or strong depending on how generally intelligent the system is. The more an AI system approaches the abilities of a human, with all of human intelligence, emotion, and broad general knowledge, the stronger we call it. The narrower it is in scope, specific to a particular application and a particular task, the weaker it is by comparison. But do these terms actually mean anything, and does it even matter whether we call a system strong or weak?

Let's start with what a strong AI system is. One good place to start is that a number of people have defined strong as meaning broad, that is, AI systems that are just generally intelligent.

The term artificial general intelligence (AGI) means machine intelligence that can successfully perform any intellectual task that a human can. This comes down to three general areas.

The first is the ability to generalize knowledge from one domain to another. A system that has learned to perform some task has knowledge tied to one particular set of capabilities; a generally intelligent system can take that knowledge and apply it somewhere else.

The second is the ability to make plans for the future based on knowledge and experience. A generally intelligent system will not only be able to respond to whatever it has been trained to respond to, but will also be able to plan for future situations.

The third is the ability to adapt to changes as they happen in the environment. So this is one definition of strong, where strong means broad, and a bunch of things come with it: the ability to reason and solve puzzles, to represent knowledge and so-called common sense, and to plan, adapt, and tie all of these together toward common goals. We have not been able to do all of that yet. If we had systems that could successfully do all these things, we would call them strong, because they are broad. But some people say that this definition of strong AI as general intelligence is not actually strong enough: just being able to perform tasks and communicate like a human is not enough to be classified as truly intelligent.

Another definition of strong AI covers systems that humans are unable to distinguish from other humans, where strong AI is defined by the ability to experience consciousness. When people discuss this kind of strong AI, they usually bring up two tests of intelligence and consciousness. The first is the Turing test. You have three parties: a human, a machine, and an interrogator. The interrogator needs to determine which one is the human and which is the machine, and if the interrogator can't distinguish them, the machine passes the Turing test.
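The imitation-game setup described above can be sketched as a small program. Everything here is invented for illustration: the two canned responder functions, the questions, and the naive length-based interrogator are all hypothetical stand-ins, not a real evaluation protocol.

```python
import random

# Toy sketch of the Turing test (imitation game): the interrogator sees only
# anonymized answers from two respondents and must guess which one is the machine.
# Both responders are hypothetical canned functions, purely for illustration.

def human(question: str) -> str:
    return {"2+2?": "four, obviously",
            "favorite color?": "blue, I think"}.get(question, "hmm, not sure")

def machine(question: str) -> str:
    return {"2+2?": "4",
            "favorite color?": "blue"}.get(question, "I do not understand")

def imitation_game(questions, interrogator):
    # Hide which respondent is "A" and which is "B" behind a random assignment.
    a, b = random.sample([("human", human), ("machine", machine)], k=2)
    transcript = [(q, a[1](q), b[1](q)) for q in questions]
    guess = interrogator(transcript)          # interrogator names the machine: "A" or "B"
    actual = "A" if a[0] == "machine" else "B"
    return guess == actual                    # True: machine caught; passing means guesses stay at chance

def naive_interrogator(transcript):
    # A naive heuristic: flag the side with terser, more literal answers as the machine.
    a_len = sum(len(ans_a) for _, ans_a, _ in transcript)
    b_len = sum(len(ans_b) for _, _, ans_b in transcript)
    return "A" if a_len < b_len else "B"
```

With these particular canned responders the machine's answers are always shorter, so this naive interrogator catches it every time; a machine that passes would have to drive such heuristics down to chance.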

The second test is the Chinese room, introduced in 1980 by John Searle, and it builds upon the Turing test. It assumes a machine has already been built that passes the Turing test and convinces a human Chinese speaker that the program is itself a live Chinese speaker. The question Searle wants to answer is: does the machine literally understand Chinese, or is it merely simulating the ability to understand Chinese?

To recap his thought experiment: he places himself in a closed room with an English-language book of instructions. People pass Chinese characters through a slot; he follows the English instructions and passes back Chinese characters as output, just as the machine in the Turing test would do to prove itself indistinguishable. He argues that there is no essential difference between the roles of the computer and himself in this experiment, because each simply follows a program of step-by-step instructions and produces behavior that is deemed intelligent. However, he argues that it is not really intelligent, because at the end of the day he still doesn't understand Chinese, even though he is producing something that people interpret as intelligent.
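Searle's rule book can be sketched as a literal lookup table: symbols in, symbols out. The rules below are invented placeholders, not a real dialogue system; the point is that the program produces fluent-looking replies while no representation of meaning exists anywhere in it.

```python
# The Chinese room as a symbol-mapping program. The rule entries are
# hypothetical examples; the program "speaks" without understanding.
RULE_BOOK = {
    "你好吗": "我很好",      # "How are you?" -> "I am fine"
    "你会说中文吗": "会",    # "Do you speak Chinese?" -> "Yes"
}

def chinese_room(symbols: str) -> str:
    # Follow the instructions mechanically: find the matching rule and
    # copy out the listed response. No meaning is represented anywhere.
    return RULE_BOOK.get(symbols, "请再说一遍")  # default: "Please say that again"

print(chinese_room("你好吗"))  # a fluent-looking reply, produced with zero understanding
```

This is exactly Searle's point: whether the lookup is done by a person with a book or by a program, the step-by-step symbol manipulation is the same, and neither understands Chinese.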

So, he argues that the computer also doesn't understand Chinese, that without understanding you can't say a machine is thinking, and that you have to think in order to have a mind. From Searle's perspective, a strong AI system must have understanding; otherwise it is just a less intelligent simulation.

We find this a very interesting aspect of artificial intelligence in general: one of the things that uniquely separates AI from all other areas of computing is how it overlaps with philosophy. Other computing is about the mechanics of getting systems to work: data, computation, storage, networking, and so on.

We don't want to dive too deeply into the philosophy here, because we have to think about how this applies today. But to continue on it a little: Searle characterizes the strong AI position as the claim that an appropriately programmed computer truly understands, and that AI can therefore be used to explain how the mind works. On that view, because you can build such systems, the study of the brain is not actually relevant to the study of the mind, and passing the Turing test is sufficient to establish the existence of mental states. These are the claims Searle sets out to refute.

So anyway, now that we have all this clarity on strong AI, let's get back to the definition of weak. What is weak? Of course, you can say anything that isn't strong is weak, but this is not particularly helpful, because we haven't been able to build anything so far that is really strong.

So let's toss out the term weak, because it's not particularly useful, and talk instead about the terms narrow or applied, and what we mean by them. Narrow means applied to a specific task: we take the various things AI can do and apply them to one task, and this intelligence is not meant, or even able, to be applied to other tasks. Think of things like image recognition, voice recognition, conversational technology, and recommendation engines. Pretty much all of our experience with AI today has been with narrow, applied versions of it. Things we once considered intelligent no longer seem so; we are slowly creeping our way up the ladder of intelligence.

As technology continues to advance, people's definition of artificial intelligence advances too. Take our children's generation, for example: they have grown up with Siri and Alexa, so that is their baseline, and they no longer consider it intelligent. They want a system that can do more. That's what makes this interesting: we didn't grow up with that, so to us it is artificial intelligence. But to our children, that's the base. It's no longer intelligent.

It's the same with cars. We now have cars that can self-park, and some that are self-driving. Just a few decades ago, people thought cruise control was high-end technology; now, if your car doesn't have cruise control, that's unusual, because cruise control is expected. We even have adaptive cruise control that senses when your car is getting too close to another car and brakes gently so you don't have to. But again, for our children that's the baseline, so they don't consider it artificial intelligence. They want something beyond that.

Even if you define something as weak AI or strong AI now, that definition is just going to keep changing over time. You can say this technology is weak, but 30 years ago it wasn't. From our perspective, it isn't even useful to classify AI systems as strong or weak, and even the terms narrow, applied, or focused don't give us any specificity about just how intelligent a system is. Can you actually measure a system's intelligence in some concrete way without resorting to a generic, relative term like strong? When we say stronger, broader, or more general, that doesn't say much either, especially because we disagree about how strong a system needs to be.

For some people, like John Searle and others who think in terms of consciousness, even an AGI system counts as weak, even though to many people AGI is something we haven't been able to accomplish at all.

So strong and weak are relative terms, just like dark and light. How light is light, and how dark is dark? It isn't really helpful. We think it's better to define this concept in terms of a spectrum, in particular a spectrum of maturity: how intelligent a system is, measured against the sort of tasks, and the range of tasks, that need to be done. For example, on one end of the spectrum we have AI that is so narrow, so focused on a single task, that it is barely above a straightforward program.

Maybe it doesn't even qualify as AI; maybe it's some very narrowly scoped deep learning task. At the other end of the spectrum, the AI is so mature, so advanced, that we have basically created a new kind of sentient being, essentially a new species. Between these two extremes there are many degrees of intelligence and applicability, and that's why the terms weak, narrow, and strong on their own don't mean much.

Our take is this: we are producing research that provides more detail on what we are calling the AI maturity spectrum. It will help enterprise users and vendors understand how to apply the spectrum to various AI systems and implementations, and why you might want a system to be at one particular level of maturity versus another.

Building off that, for our enterprise user and vendor readers: we don't want you to get fixated on this terminology, and we don't want you to get lost in the philosophy. Understanding the history of AI and how we got to this point is good to know, and so is the vision people have in mind for the future of AI.

So know how the boundary of AI maturity is evolving, as we just discussed, and then figure out what level of AI maturity your particular problem actually requires. Not everybody needs a superintelligent system to solve the basic problems they have.

What matters is how you apply the technology, and how it is evolving to meet new needs.

We're going to look at the capabilities of these AI systems and map them across the spectrum of what we imagine AI can do, including things we have not yet been able to do, and of course we will keep track of how this boundary moves as the imaginable becomes increasingly possible.
