Can AI Ever Be Conscious?

Khun Yee Fung, Ph.D.
Programming is Life
5 min read · Jun 24, 2024

Well, I saw the article “Why AI Will Never Be Conscious”. I mean, it definitely tickled my interest. I did not feel like commenting on the article, as there are too many caveats, too many assumptions, implicit or otherwise, to sort out. In a comment, that is more or less impossible to do. So, I hesitated, thinking I would just let it go. Still, it sort of got under my skin a little not to write something about it. So, here goes.

First, the title contains quite a few things that require explanation. The term “AI” obviously is one of them. There are many avenues to a conscious device, and the current layman understanding of the term “AI” covers none of them. So, technically, the title is correct: AI can never be conscious. But of course, that is not the underlying assumption of the title. It is really about whether we can create a conscious device/system with the current understanding of what a computer is. Well, that brings out still more ambiguous, undefined, implicit assumptions, understandings, and biases. Who says the current von Neumann architecture based computer must be the only way to do computing? It is just one of many possible ways to do computing; it just happens to be the dominant one right now. I mean, people are talking about quantum computing, right? I am not saying quantum computing is going anywhere. It might or might not. I only know that when it happens, it is not going to be that much more powerful than the current architecture, or, like fusion power generation, it may be close to impossible to implement. But still, it is an example showing that the von Neumann architecture does not have to be the only way to do computation.

Oh, and who says consciousness must be realised with a computing device? Sure, tons of people say our brain is like a computer. But it is a metaphor, people. It is not meant to be literal. Our brain is NOT like a computer. There is almost no commonality between our brain and a contemporary computer. A brain has billions of neurons, and each of them, yes, EACH of them, is an autonomous agent. If you like, each is a core of a parallel computer. Yeah, go build a computer with billions of cores. About the size of our brain, too.

And what is consciousness? Okay, I am not going down that rabbit hole. Seriously, even though we all laughed at the Google employee who claimed that a Google system he tested was sentient, nobody can say definitively when a system counts as sentient. That employee was laughed at only because the system in question was obviously too crude, too far from sentient, to be called sentient. Until we are well past the border of sentience, we will not know where to draw the line. In this world, everything at the border is fuzzy. Even life and death are very hard to differentiate at the boundary. Same with consciousness.

I don’t subscribe to panpsychism. I just don’t. And I am not going down that rabbit hole either. Let’s agree to disagree. So, yes, there is a boundary, fuzzy as it is, beyond which we start getting consciousness out of an artificial device.

Now that the title has been thoroughly reworked, what do we get? Okay, how about “Can an artificial device be cognitive?” And let me define what a cognitive device is. It is a device that can cut through noise (like the current neural-network-based AI, only better, as it must be a broad system, not the narrow, single-domain systems that we have right now) and react to events, either immediately or after an appropriately long delay. It is grounded, so that it understands the meaning of the events happening around it. The understanding does not have to be correct; think of a dog believing that its image in a mirror is another real dog.

That is the baseline. Dogs are cognitive. Quite primitive by our standards, sure, but they are cognitive. You can send them into space inside a spacecraft, and they will adjust, correctly or incorrectly, according to their grounded understanding of their environment, wherever it is.

Of course, human-level cognition, especially with the ability to use language, would be wonderful, but I don’t think that is necessary for the discussion here.

So, can we create a device/system that is on the same level, cognitively speaking, as a normal dog?

My answer is yes, we can. Is that thing conscious? Well, it is as conscious as a dog. Are we close to creating something like that? No, we are very far from it. Can we create such a thing without creating a biological system? I tend to think yes, but it will have to be a lot more sophisticated than anything we can do right now. The ability to self-repair, for example, which every organism has to various degrees, is already a hugely interesting issue. I am not talking about a robot sourcing materials from the environment to replace a faulty or damaged part. That can be done much sooner than what every organism does: self-repair by synthesizing the materials for the repair within itself. Like having a mechanic shop inside the body. That is, if you can create a robot that can fabricate, inside its own body, the PCBs or electronic components it needs to replace or repair a part, then we will have a device with the equivalent kind of ability.

There are other abilities that we take for granted that can be very difficult to put into an artificial system. All organisms strive to prolong their own lives, in all kinds of ways, because they all have one thing in mind: to continue their “blood line”, even organisms without blood. So there is a fundamental “reason” for being. And because of this reason, natural selection favours those organisms that adjust and adapt best to their environment and leave behind offspring. Can an evolutionary mechanism like natural selection work with a reason for being other than leaving behind offspring? I don’t know. But it is certainly a fascinating topic to think about, especially if the mechanism is not natural selection. What would it be for artificial devices?

Are we close to creating something that can trigger an evolutionary mechanism similar or analogous to natural selection, so that such devices become truly self-sustaining? I think not. Not for a long time. Is it possible? Maybe. I don’t know. Most probably, I am guessing. But this is much more than creating a single instance of a cognitive device equivalent to, say, a dog.


I am a computer programmer. Programming is a hobby and also part of my job as a CTO. I have been doing it for more than 40 years now.