The tech industry has had its trends over the years. When I was young, the big thing was just owning a personal computer. Every year a new type of computer would pop up, with a different operating system. Later, with Macintosh computers and the Windows OS (or Amiga, for anyone who remembers), mouse-driven operating systems became a thing. Then came the internet, smartphones and mobile, social media, etc. Nowadays, I think a lot of it is about machine learning, artificial intelligence and big data. Who knows what the next trend will be. All is well, nothing to report.
Well, not quite. Machine learning, neural networks, and artificial intelligence are nothing new. The existing theory and much of the software behind them was developed a long time ago. With big data, we found new profitable uses for this technology, which made many people rich, put some people in office, and pissed off others, but we haven’t really solved what are referred to as “the hard questions”. Roughly speaking, we don’t really know how what we know of machine intelligence maps to our experience as conscious agents, and if we’re honest about it, our intuitions are confidently saying “no, no, no”. Neural networks don’t “behave” anything like us. At best, they represent what goes on in the background as we learn. They don’t explain how we think or feel.
I am saying this from my experience as a software engineer who studied a bit of machine learning and philosophy at university, and from what I picked up here and there. I have written two books that deal with similar subjects (among others). That is to say, while I am really not an expert in machine learning or artificial intelligence, this is not my first rodeo. I admit I don’t know what companies like Google, Amazon and Facebook have discovered in their machine learning projects, so there might be groundbreaking, paradigm-shattering theories kept private which would make my thoughts on the subject completely outdated. Still, I feel that line of thought resembles a conspiracy theory more than anything else. For the purpose of this text, I will assume all they do is more of the same, even if at a much greater scale.
Back to the beginning: again, artificial intelligence as an application of computer science is nothing new. Alan Turing, who invented the computational model that sits at the core of contemporary computers, is the same Alan Turing who invented the Turing test, a test of whether a computer’s behavior in conversation is indistinguishable from a human’s. That is to say, computers aim to mimic human intelligence, and to a large extent, human consciousness. It is in their “DNA”.
Over the years, achievements were made. Computers have passed the Turing test. Machine learning is powering the business models of tech giants. Self-driving cars are almost here, and there’s lots more going on. For many, this is a great victory. Still, we made some unexpected discoveries, which opened the field to different interpretations.
In a nutshell, we can’t figure out the role of our consciousness. We found that the way we feel is an outcome of the chemical state of our brain, which basically means that emotions are reducible to non-conscious matter. There are more and more proponents of the claim that “the self” does not exist, that our mental world, possibly subjectivity itself, is fictitious. We discovered that the brain activity correlated with a choice of action comes before we become conscious of it, suggesting free will is an illusion. Our consciousness does nothing; it’s an epiphenomenon, just along for the ride.
Artificial intelligence researchers and engineers absolutely love this assumption. They tried so hard to solve the hard questions of consciousness (namely, how does the brain give rise to the sensation of subjectivity), with no success. Now they think they don’t have to. There is no hard question. It’s all a mirage.
But is it? Biologists claim this doesn’t make sense. Consciousness is so far from trivial, it can’t just “emerge”. It has to be the product of a long evolutionary process, and evolution is never this wasteful. Still, that’s all circumstantial. “It’s a by-product of complex systems,” they say. “It doesn’t really matter”.
Well, I would like to suggest a different explanation. If our conscious thoughts come after we decide how to act, then that’s that, sure. But that doesn’t mean consciousness is epiphenomenal, and it doesn’t mean we have no free will either. It simply means consciousness kicks in only after we make a decision. Consciousness is a “post-process” of decision making. It evaluates the decisions the brain makes, changing the brain so that the next time it makes a decision, it will be more to our conscious liking. In other words, consciousness is here to judge the decisions the brain makes, and to judge effectively, it must consider the outcomes of those decisions. If you have some background in machine learning, you can think of it as somewhat similar to the reward function used in reinforcement learning to train an agent.
While I don’t have the resources or the intent to prove this is so, arguably, it’s not that difficult to accept. I mean, we know this is the case. We know how it feels to be us. We think “this was the right action”, “this decision was good”, “that decision was bad, let’s not do it again”, “I like this”, “I don’t like that”, etc. We do this over and over, every day, all day. Those of us who know something about machine learning also know that to train a neural network, we need an objective, a loss or reward function, to score how well it did, so there. Consciousness provides us with a “reward function”, of sorts.
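For readers who like to see the analogy concretely, here is a purely illustrative toy in Python. Every name in it is hypothetical, and it is a sketch of the reward-function analogy, not a model of consciousness: the “brain” picks actions first, by habit, and a judge scores each decision only after it was made, nudging future habits.

```python
import random

random.seed(42)  # make the exploration reproducible

# Hypothetical "conscious" judge: it only sees a decision after the fact.
def reward(action):
    return 1.0 if action == "good" else 0.0

# The "brain": a preference table reshaped by each after-the-fact judgment.
prefs = {"good": 0.0, "bad": 0.0}
epsilon = 0.1  # occasionally try something other than the habit
step = 0.1     # how strongly each judgment reshapes future decisions

for _ in range(500):
    # 1. The brain decides first (mostly by habit, sometimes exploring)...
    if random.random() < epsilon:
        action = random.choice(list(prefs))
    else:
        action = max(prefs, key=prefs.get)
    # 2. ...and only afterwards is the decision judged and the habit updated.
    r = reward(action)
    prefs[action] += step * (r - prefs[action])

print(prefs["good"] > prefs["bad"])  # prints True
```

The point of the toy is only that a judge which runs strictly after each decision still ends up steering behavior, which is the essay’s claim about consciousness.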
Still, isn’t that exactly what neural networks do (as in, given an input, a neural network makes a decision, or what is otherwise referred to as “classification”)? Well, yes, but this is to be expected. I mean, this is what controlling agents do. They make decisions. Moreover, it is common to have multiple biological systems that help each other while doing similar things. However, this alone does not mean that consciousness is “doing something”. The sensation of having a consciousness could be just the side effect of a biological neural network doing its thing. Yes, it might be that this side effect occurs only after the fact, but that’s more of an anecdote.
Well, I think there’s a bit of a barrier here, caused largely by what is known in mathematics as orthogonality, where two things vary independently of each other. Let me explain what I mean. For the sake of argument, let us suppose that some of the mechanics involved in generating consciousness are not materialistic. They don’t happen in matter (or energy), and so no experiment done in matter could prove they exist. Still, we know they affect consciousness, and as I just suggested, consciousness affects behavior by judging behavior.
Now, we could say that if we found such elements of consciousness that have no materialistic manifestation, then that would be proof that materialistic computers cannot be conscious, but it really isn’t. First, potentially, such non-materialistic components of consciousness could be synthesized by a materialistic computer. I mean, there are many things we learned how to do by mimicking biological systems, and which we now synthesize in a lab in a very different manner. The second, far more difficult point is that we don’t know what we don’t know. The existence of non-materialistic mechanics is something we don’t know to be true, and we don’t know whether it will be shown to be false somewhere down the line.
Well, that is all fine, but I don’t think it’s convincing. I mean, we know there is an element of consciousness that so far we just can’t reduce to anything mechanistic, and that is “meaning”: semantics, “what it feels like”. We have not found any way to go from algorithms and syntax manipulation to semantics, even though we know it exists in our consciousness, and even though we know it plays a very significant part in what it means to be “us”. So what I am saying is, if we can find a role for consciousness, and we can find something consciousness does that we just can’t reduce to anything we can spot neurons doing in our brain, then it is scientifically reasonable to assume this role exists, and is different from what the brain does materialistically. While this is clearly not proof, I would submit it is in no way more controversial than the alternative, namely, that we just don’t know yet, and it might be reducible to materialistic manipulations. To summarize, I am suggesting that consciousness is not a side effect of neural networks doing their thing, but rather, that it complements them with semantic classification.
I know how all of this might sound, but really, it doesn’t have to be spooky. I am not saying consciousness is some supernatural boundless force. At its core, it does the same thing again and again, just like neurons fire again and again. It simply utilizes non-materialistic features of reality to do its repetitive job, again and again and again. And if we understand what it does, we might be able to reproduce some of it in artificial intelligence, even without implementing actual semantics. In other words, talking about this might actually be useful. If you’re working in artificial intelligence, it might even make you some money (this is a lie, it probably won’t).
Ok, so what could consciousness be doing, really? Well, I suspect the following. I know from my experience that the words I use somehow carry with them their semantic summary. To clarify, when I say the word “banana”, I hear how it sounds, I see a banana in my head, I see how it’s spelled, I remember the taste of a banana, I remember how it smells, I remember times when I bought a banana, etc. The more I think about bananas, the more everything I know becomes somehow related to bananas, if only by the fact that I was thinking about a banana while other things happened. There’s this thing about consciousness where things appear as clusters of experiences, merged into one.
I therefore speculate this quality of consciousness is tightly related to what consciousness is doing. Roughly, as consciousness encounters sensations, it attempts to assimilate them into its associative web of semantics. If it succeeds, great. It lets the brain continue doing whatever it was doing. If it fails, it signals the brain: “This isn’t working. Please do something else”. The same applies to thought (another mystery in the field of artificial intelligence). When we think, consciousness is telling the brain, “This is great, ship it!” or “I don’t like this. Please recompute using this neural pathway, and show me what you got”. This model also explains why, when we master a skill, we don’t need to give it attention: everything it involves is already mapped to our conscious semantic model. Consciousness doesn’t have anything to add.
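The loop just described can be written down as a sketch. Again, every name here is hypothetical and this is not a claim about real brains; it only shows the control flow I am speculating about: assimilate if possible, otherwise ask for a recompute.

```python
# Toy "semantic web": associations consciousness already knows how to relate.
semantic_web = {"banana", "yellow", "fruit", "sweet"}

def assimilate(associations, web):
    # Assimilation succeeds if the new sensation connects to anything known.
    return any(link in web for link in associations)

def conscious_verdict(associations, web):
    # The post-process: accept what fits the web, send back what doesn't.
    if assimilate(associations, web):
        return "ship it"
    return "recompute"

print(conscious_verdict(["yellow", "curved"], semantic_web))  # prints: ship it
print(conscious_verdict(["quark", "gluon"], semantic_web))    # prints: recompute
```

The second call also illustrates the mastery point: once every association a skill involves is in the web, the verdict is always “ship it”, and nothing demands attention.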
This is all straightforward classification, and so again, it invites the question “Why do we need consciousness to do this?”. Well, while as I said we could try to integrate this into artificial intelligence, it is important to realize what’s being done here. Yes, there is decision making, there is classification, but it is based on semantics. It is based on the hypothesis that the stuff the brain handles is not just particles and molecules: in consciousness, things appear as semantic elements. There simply isn’t anything like this in a neural network, because it’s just a “machine”. Moreover, the quality of semantics, where everything “leaks over” into everything else, where everything is everything else, doesn’t “feel” like a spatially spread-out thing such as a neural network in the brain. In natural neural networks, signals propagate from neuron to neuron, like a long convoluted train. Combine that with the fact that we can’t isolate any part of the brain where consciousness is located, and it seems highly unlikely that consciousness would use the structure of neural networks as an operational foundation. It seems more likely that consciousness operates in a way somewhat similar to systems in quantum mechanics. When the conscious computation “ticks”, everything it computes “ticks” together. That is to say, there’s no reason to think neurons don’t use some sort of funky attribute of quantum mechanics, but if they do, it might be unrelated to the physical structures neurons form in the brain. Back to computer science: it might be completely unrelated to what we know as machine learning.
Note, I am not claiming quantum computers generate consciousness. There’s very little reason to think that. I am merely suggesting that consciousness might use attributes similar to those found in quantum mechanics to “connect” the materialistic neural network classifier and the conscious classifier. In simpler terms, I’m saying “we’ve seen things like this before in quantum mechanics, so let’s not dismiss this out of hand”.
I agree this is all very hand-wavy, which raises the question “why did I bother writing this?”. This is hardly enough to make any substantial progress in artificial intelligence research, so why should we care? If a computer has already managed to pass the Turing test, who cares if, under the hood, artificial intelligence is very different from human intelligence?
Well, I guess I’m writing this as a reaction to how the progress in artificial intelligence research made us think about who and what we are. The smarter we seem to be, the more we look down on our consciousness, as if it’s a flawed computer. We don’t know if any of this is true, while we dismiss so much of our experience, and all I’m saying is, the game is definitely still on.