My Personal Chinese Room
The Blurring Line Between Biological and Digital Intelligence
If you’re not familiar with the Chinese room thought experiment, here’s the tl;dr — if a computational program can take an input and produce exactly the output that a knowledgeable, typical human might produce, is it actually thinking? Or is it just responding via a learned or programmed set of rules, and thus merely giving the appearance of thinking without actually doing so?
(Or for a more entertaining tl;dr — check out this link. Throwing a lot of love at Saturday Morning Breakfast Cereal in this blog post.)
If the computer is nothing but a rigid, extremely thorough set of rules put together by a thinking entity, then the answer is clear: the program is not thinking — it’s just doing what it’s designed to do. All of the thinking was done in the process of putting the program together in the first place.
However, what if the rules aren’t hard-coded? What if, say, a complex neural network were set up and trained on massive volumes of data (the totality of the Internet, for example), and had to learn over innumerable trial-and-error exercises what correct responses might be? What if the thinking beings that set up the system just put the hardware and basic logic in place, and everything else this program did was a learned behavior, one that could be nuanced given a wide range of variables? What if we didn’t know **how or why** the computer program responded the way it did, but the way it responded was similar to how we would expect a knowledgeable, thinking being to respond?
How would you tell the difference between the conversations you might have with this AI-based program and the hard-coded but extremely thorough program in the first example, assuming the developers were thorough enough to build in nuance, avoid repeating answers verbatim, and so on?
More to the point: how would you tell the difference between the conversations you might have with the AI-based program vs. a similarly knowledgeable human being? Or, to put an even finer point on it, a conversation you have with yourself as part of an internal monologue?
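The contrast between the two programs can be sketched in a toy way. This is a hypothetical illustration, not anything resembling a real neural network: one responder whose replies were authored by hand, and one that “learned” its replies from training examples (with simple memorization standing in for actual training). From the outside, the two behave identically.

```python
# Toy illustration: a hand-authored rule table vs. a system that
# acquired the same rules from examples. An outside observer querying
# both cannot tell which one "learned" its behavior.

# 1. The classic Chinese room: every response was written in advance
#    by the (thinking) developers.
RULES = {"hello": "hi there", "how are you?": "fine, thanks"}

def scripted_reply(prompt):
    return RULES.get(prompt, "i don't understand")

# 2. The "learned" version: no responses are hard-coded; the program
#    builds its own rules from training data. (Memorization here is a
#    stand-in for gradient descent over billions of parameters.)
def train(examples):
    learned = {}
    for prompt, reply in examples:
        learned[prompt] = reply
    def learned_reply(prompt):
        return learned.get(prompt, "i don't understand")
    return learned_reply

reply = train([("hello", "hi there"), ("how are you?", "fine, thanks")])

# Identical outputs, entirely different origins for the rules:
assert scripted_reply("hello") == reply("hello")
```

The point of the sketch is only that behavior alone can’t distinguish authored rules from learned ones — which is exactly the question the thought experiment poses.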
This is a really deep philosophical discussion, and not usually the type of thing I write about here, but with recent events in the news, I thought it might be worth a quick write-up. We are all computers, of a sort. We start with a base set of knowledge and understanding, and then through trial and error + reinforcement learn how to do things like walk, talk, manipulate objects, and write blog posts. (For some of us, that last one is more error than others…) We perceive what we are doing as a set of conscious decisions, but the actual process of “thinking” is just a set of chemical / biological reactions in response to a given set of stimuli, and even your objections to that idea — including the example you provide of a time you were going to do one thing and changed your mind, and thus have “free will” — are chemical / biological reactions in response to a given set of stimuli. The fact that you can perceive yourself as creating these stimuli doesn’t change the fact that your doing so is itself a response to some stimulus.
When you break it all down, while it’s nice to feel like we are in charge of our own destinies (and feeling that way is part of the stimuli that drives our behavior), at the end of the day we are all just Chinese rooms. What we call “thinking” is an evolutionary adaptation to our environment that has resulted in (so far) a relative advantage in terms of the proliferation of our species overall. What we perceive as “consciousness” is very much the same kind of thing: while the perception of consciousness has been part of our evolutionary development, in the end it can all be traced back to a series of chemical reactions in a biological system. Change the reactions and you change the experience. Change the biology and you change the experience altogether. If it’s all just biology and chemistry, responding to stimuli both real and imagined, then what is the difference between that and a digital neural network composed of billions of nodes, responding to stimuli both externally and internally generated?
The human bias towards thinking of ourselves as unique has been slowly unwinding in the biological world. We have been forced to recognize self-perception in birds, use of relatively complex language in other primates, and a variety of thinking / feeling behaviors we once considered uniquely human throughout the animal kingdom. It will only be a matter of time before we are faced with the reality that our systems of thinking / feeling can be replicated by adequately complex digital systems. Are we there today? I don’t know. But I believe it’s foolish to think we will never get there, and honestly, to think we won’t get there soon (assuming it hasn’t happened already).