Can Machines Think? Alan Turing Edition

If you are going to ask about thinking, you’ve got to ask about whether or not computers can think, and this question can’t be, “can we imagine a computer thinking” — because we certainly can imagine that, and have, over and over again in fiction. And this question also can’t be, “can a computer do something that looks like thinking?” because of course they can, and have, and as it turns out, YouTube comment sections have set the bar so low for what constitutes something that looks like the product of human thought that computers can clear it quite easily.

The question I guess is, can a computer do a thing that we would recognize as being analogous to thinking? And a major problem with this question is that we have almost no idea what the hell thinking actually is.

Anyway, today Alan Turing lays out the question in the Imitation Game, a thought experiment that provided the basis for what we call the Turing Test, which is not really a test at all, nor was Turing trying to describe one with it:

If we imagine an interrogator in one room, asking questions of two people in another, and one of those people is actually a machine, can the interrogator figure out which one is which?

But that’s the Turing Test, which isn’t real. The real question here is, can we conceive of a computer that could, for any length of time, sufficiently fool the interrogator into believing it was a person? And another way to ask that might be, can any human being create a set of questions that only a thinking creature could answer, and a machine programmed to answer questions could not?

And it seems to me that the answer here intuitively is no, of course we can’t create such a set of questions. Because look, how might a thing answer questions? It can answer yes or no, of course, which a human being could do, and it might do so rightly or wrongly, which a human would also do. It could refuse to answer on the grounds that the question is stupid or nonsensical or it doesn’t care — all perfectly normal responses we might expect from a human being, but also all fairly easily programmed into a computer. It might just supply a string of bullshit (if asked, for instance, what it thinks about Schopenhauer) — very easy for a human, encountering something outside the bounds of our regular experience, and maybe difficult to program a machine to do, but we can certainly imagine it happening.

The question itself relies on a notion that the kinds of questions a human being can “answer” (and here “answer” means formulate a response that seems intuitively human) never really exceed the human capacity for answering things. It seems to me though that human beings routinely encounter questions that are outside the bounds of our ability to answer, and when that happens we have a number of instinctive or reflexive (or customary, as D. Hume might say) behaviors for responding to them — dissembling or dismissing or changing the subject, as you like.

This is all fine, but it doesn’t really, I think, address what Turing was trying to get at. In his section on Learning Machines, he seems to be making an argument that we can, eventually, build a machine that can think in a way we’d recognize as thinking, but the part I’m interested in right now is the one about the Argument from Consciousness:

I like this one because it reminds me of the qualia question. We can know what a computer says, but until we have direct experience of it ourselves, there’s no way we can tell the difference between simply signifying actions based on emotions and actually having them, and a sufficiently advanced computer might be able to signify all sorts of things, or at least enough things as to be close enough for government work.

(There are some people who will argue that, while we can imagine something like that, in fact some things are simply too sublime to have been created without feelings — the works of Shakespeare or the Bible or, I dunno, Mozart’s symphonies or something. That could be! We definitely haven’t built computers that can approximate them yet! Though, contrariwise, a lot of regular human beings can’t make those things either, so this raises the somewhat uncomfortable question as to whether or not sublime creation is actually the product of super-human thought, and/or whether or not it is the product of regular human thought and the rest of us are simply sub-human.)

But so, I’d like to take this apart and worry around the idea of what “thinking” is. A computer, of course, doesn’t have an emotion or a want or anything like that — computers don’t want anything, they just do as they’re told. We can tell them to want something, of course — we can program a calculator to ask for numbers to calculate, but is that the same thing as wanting something?

Well, I don’t know, but what does it mean for me to want food, for instance? I don’t have a personal recollection of learning that food is good and, after observing my own baby for some time, it doesn’t seem to me that human beings actually start by being consciously aware that putting food in our mouths is a solution to hunger. It seems, actually, that we’re programmed to do it, by instinct (and, thank you David Hume, yes, later by custom).

I suppose we could say that a want programmed in by a programmer doesn’t count in the same way that instinct or custom, as learned behaviors, do, but I don’t think this holds water very well. Let’s put a pin in it; maybe we can come back to this idea.

So, in a certain sense, a machine can be programmed to want some things in the same way that I am programmed to want some things — a part of me acts up when it encounters a particular kind of stimulus (for the calculator, a bunch of un-added numbers, for me a juicy cheeseburger), this acting up directs certain behaviors towards the satisfaction of that want, and then, having achieved it, some other signal is sent out indicating that want has been fulfilled (i.e., my stomach tells me that I’m full and floods my system with endorphins; the calculator’s internal system is informed that it’s fulfilled its daily requirement of summation).
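(To make the analogy a bit more concrete, here is a toy sketch in Python of that loop: a stimulus makes the want act up, a behavior goes after satisfying it, and a signal marks it fulfilled. The Want class and every name in it are mine, made up for illustration; this is one way the loop could be written down, not anybody’s real architecture.)

```python
# Toy sketch only: all the names here are invented for illustration.
class Want:
    def __init__(self, name, trigger, satisfy):
        self.name = name
        self.trigger = trigger    # the stimulus that makes this want "act up"
        self.satisfy = satisfy    # the behavior directed at satisfying it
        self.fulfilled = False

    def step(self, world):
        if not self.fulfilled and self.trigger(world):
            self.satisfy(world)       # do the thing
            self.fulfilled = True     # the "I'm full" signal

# The calculator's want: a bunch of un-added numbers lying around.
adding = Want(
    "add numbers",
    trigger=lambda world: len(world["unadded"]) > 0,
    satisfy=lambda world: world.update(total=sum(world["unadded"]), unadded=[]),
)

world = {"unadded": [2, 3, 5], "total": 0}
adding.step(world)
print(world["total"], adding.fulfilled)   # 10 True
```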

We wouldn’t call this thinking though, either when the calculator does it, or when I do it. (I suppose I do a lot of things without thinking about them; you might even go so far as to say that of all the things I do, “thinking” is actually the smallest part of it.) A fair point, so when do I think about something?

One time I might think about something is when I’m faced with a choice — maybe I’m hungry, and I have two options to eat: a cheeseburger or spaghetti. And then I assess which one I’d prefer to eat, or I measure it against some standard (how many calories should I be eating, is it wrong to eat meat, et cetera), and then I make a decision about that. We could call that thinking, couldn’t we?

We might also call it thinking if I have two desires at once — the desire to eat at a local hamburger stand and the desire to not go out on a chilly, rainy autumn day — and some process engages to help me make a decision between these two things. Is that thinking?

Well, it’s not writing a sonnet or being moved by jealousy, but I don’t suppose it takes a huge stretch of the imagination to think that if we piled up not just two different desires but ten or twenty or a hundred instinctual desires, and then piled on top of them thirty-odd years’ worth of customary responses to them and the circumstances in which they occurred (for example, by now out of habit I don’t go out in the rain, and no longer need to actively say to myself, “but if I go to get a hamburger, I’ll get cold and wet”), and then also living in a world in which there are multiple options to satisfy those desires and also a world in which the material that my senses perceive doesn’t always agree, and then on top of THAT we put this mechanism that’s meant to sort of harmonize all of these different things —

what I’m saying is, I don’t think it’s a huge stretch of the imagination to say that “producing a sonnet” might very well be a kind of solution to this vast tangle of oppositional forces.

If we go back to our calculator, of course we don’t call it thinking when, in response to its interior programming, the calculator demands numbers to add up, but what if it had two desires? What if it was also programmed to, I don’t know, let me nap if it saw me sleeping? So it’s got these two desires, and a third mechanism that, when it notices the desires are in conflict, has to determine which one takes priority? The calculator, ever-hungry in its quest for numbers, encounters me napping on the couch. What should it do? The regulating mechanism kicks in — the calculator hesitates, because it is programmed not to wake me up. It canvasses its recent memory, and recalls that when this same situation occurred yesterday, I woke up but gave it no numbers to add together. To avoid that, the mechanism suppresses its adding desire for now, then wheels off into another room to find numbers somewhere else.
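(Here, for whatever it’s worth, is that same toy calculator with its two desires and the regulating mechanism written out in Python. Everything in it is invented for the sake of the example: the rules, the memory, the strings it returns. It’s a sketch of the scenario above, not a claim about how any real system works.)

```python
from collections import deque

# Toy sketch only: the two desires, the regulator, and the memory are all invented.
class Calculator:
    def __init__(self):
        self.memory = deque(maxlen=10)   # recent outcomes of past choices

    def decide(self, owner_sleeping):
        want_numbers = True              # desire one: ever-hungry for numbers to add
        want_quiet = owner_sleeping      # desire two: programmed to let the owner nap

        # The third mechanism only kicks in when the two desires conflict.
        if want_numbers and want_quiet:
            # Canvass recent memory: did waking the owner pay off last time?
            if ("woke owner", "got no numbers") in self.memory:
                return "suppress the adding desire; wheel off to find numbers elsewhere"
            return "wake the owner and ask for numbers"
        return "ask for numbers"

calc = Calculator()
calc.memory.append(("woke owner", "got no numbers"))   # what happened yesterday
print(calc.decide(owner_sleeping=True))
# suppress the adding desire; wheel off to find numbers elsewhere
```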

Is…that thinking? It seems like thinking. It seems, at a very simple level, to be at least analogous to something that I do and which I would call thinking.

Well, it’s clear then that we could create a mechanical process that might duplicate a certain kind of thing that we could reasonably call thinking, but if we want to know the answer to this important question, we might also have to ask: is there anything to thinking that isn’t this process?

(Maybe John R. Searle knows the answer, he’s next.)