Can Machines Think?

Rounak Banik · Published in The Startup · Jul 2, 2019

Disclaimer: This post is part of a series of essays I wrote at the Young India Fellowship. This particular essay was written as part of the course titled Philosophy and Cognitive Science.

Alan Turing

Alan Turing’s Computing Machinery and Intelligence is a landmark paper that laid the foundations for artificial intelligence and related sub-fields of computer science. The paper introduced the imitation game (otherwise known as the Turing test), which Turing claimed was a closely related but unambiguous counterpart of the question ‘Can machines think?’.

In the paper, Turing quite openly states that he is optimistic about machines performing well in the imitation game in the near future. More concretely, he is confident that by the end of the century, an average interrogator will have no more than a seventy percent chance of correctly identifying the machine after five minutes of questioning. Turing is therefore of the opinion that machines thinking will not be an absurdity in the future. However, he is prudent in considering the opinions of the opposing party (those who believe machines will never be able to think) and providing counterarguments to them. This paper is a critical review of the debate between Turing and the philosophers and schools of thought that disagree with him.
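Turing's prediction can be read operationally as a bound on the interrogator's hit rate. The following is a minimal sketch of that reading: the 0.7 figure is Turing's stated bound, while the Monte Carlo framing and function name are mine, purely for illustration.

```python
import random

def simulate_turing_prediction(trials=100_000, p_correct=0.7, seed=0):
    """Monte Carlo reading of Turing's prediction: if an average
    interrogator identifies the machine correctly with probability
    p_correct after five minutes, what fraction of games does the
    machine survive? (p_correct = 0.7 is Turing's stated bound.)"""
    rng = random.Random(seed)
    # Each trial: the judge fails to identify the machine with
    # probability 1 - p_correct.
    fooled = sum(rng.random() >= p_correct for _ in range(trials))
    return fooled / trials  # fraction of interrogators the machine fools

# Under Turing's bound, the machine fools roughly 30% of judges.
```

In other words, Turing's bet is not that machines will fool everyone, only that they will fool a substantial minority of average judges in short conversations.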

The Theological Argument

The theological argument can be broken down as follows.

P1: Thinking is a function of man’s immortal soul.
P2: God has given an immortal soul to every man and woman.
Sub-Conclusion: Man can think.
P3: God has not given an immortal soul to any other animal or machine.
C: Therefore, no animal or machine can think.

Turing first attacks the soundness of the argument by questioning the factual basis of P2 and P3. More specifically, he argues that the bestowal of a soul on a particular entity depends on religion; he notes, for instance, how Christians regard the Muslim view that women have no souls. Since religions cannot universally agree on how the immortal soul is distributed among various kinds of beings, there is good reason to believe that premises P2 and P3 are false.

Turing then attacks the argument's claim to permanence. He argues that although the premises and the conclusion may be true at present, they may not be true in the future. The fact that God has not given an immortal soul to a machine does not imply that God will not bestow one in the future. To assume that God won't would call God's omnipotence into question, thus rendering the argument moot, at least theologically speaking.

P: God has not given an immortal soul to machines.
P: God may give an immortal soul to machines in the future.
C: Machines may be able to think in the future.

In his final line of argumentation, Turing argues that since a lot of theological arguments have been proven wrong in the past, it gives us reason to believe that this argument will be proved wrong too. Theologians do not have a very good track record of providing accurate models or predictions. Turing bases this argument on his inherent assumption that past achievements and failures are a very good indicator of future performance.

The earth-centric model of the universe.

P: There is a theological argument whose conclusion has not yet been disproved.
P: Several theological arguments in the past have been disproved.
C: There is good reason to believe that the theological argument in question will be disproved too.

I believe Turing’s line of reasoning is adequate. The theological argument makes an extremely strong statement when it says ‘no machine can think’. There is an implicit assumption made that this statement holds true for all times. When we say A cannot do B, we imply that A cannot do B now or in the future.

It is possible that some of the premises of the theological argument are rendered false in the future, thus making the argument moot. This is very plausible in the field of technology where the scale of development knows no bounds. The sophistication of technology is not static. It is constantly increasing and due to its ever-increasing growth, there is a good reason to believe that machines will eventually be complex enough to be considered as having an immortal soul. In other words, the theological argument is unlikely to stand the test of time.

The Heads in the Sand argument

P1: Man is superior to all other creations because he can think.
P2: If machines can think, man would lose his commanding position in the universe.
P3: It is not possible for machines to have a superior position to man in the universe.
C: Therefore, machines cannot think.

Turing does not bother to provide a refutation to this argument, and it is easy to see why. The premises of the argument are extremely weak, threatening its soundness. The most problematic premise is P3, which implicitly assumes that man is the greatest creation; it stems from theological beliefs that place man at the center of the universe.

However, advancements in science have shown that our place in the universe is quite insignificant. We have moved from the earth-centric model of the solar system to the heliocentric one. We have acknowledged that we lie on the periphery of our galaxy. And we’ve also realized that our galaxy is only one of the hundreds of billions in the observable universe.

The Drake Equation to compute the number of technologically advanced civilizations.

Probabilistically speaking, there is a very good chance that sentient beings such as ourselves exist elsewhere in the universe; the puzzle of why we nevertheless see no evidence of them is known as the Fermi paradox. Our ability to think, therefore, may not be exclusive to us. We thus have good reason to believe P3 is false, because man was never as superior as he thought himself to be.
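The Drake Equation referenced above is simply a product of seven factors. A short sketch, using illustrative (not consensus) parameter values chosen only to show the arithmetic:

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Drake equation: expected number N of communicative civilizations
    in the galaxy, as the product of seven estimated factors."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Illustrative values: 1 star formed per year, half of stars with
# planets, 2 habitable planets per such star, life on 10% of those,
# intelligence on 1%, communication on 10%, civilizations lasting
# 10,000 years on average.
N = drake(1.0, 0.5, 2.0, 0.1, 0.01, 0.1, 10_000)  # → 1.0
```

The striking feature is how sensitive N is to each factor: small changes in any single estimate swing the result by orders of magnitude, which is why estimates of N range so widely.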

The Mathematical Objection

There is a class of questions (henceforth referred to as Q) of the form “Consider a machine specified as follows… will this machine ever answer ‘Yes’ to any question?”. A machine’s answer to Q is always either wrong or not forthcoming if the machine bears a simple relation to the machine described in Q. With this foundation, the mathematical objection can be stated as follows.

P1: Machines can never answer Q correctly if they bear a simple relation to the machine described in Q.
P2: Humans can answer Q correctly.
Sub-Conclusion: There is a limitation to the thinking of machines that is not applicable to humans.
C: Machines can’t think in the same way as humans do.

Turing argues that the feeling of superiority that men get from answering Q better than machines is extremely illusory. This is because humans are known to display these tendencies too. Humans also individually have a class of questions R that they answer wrongly on the basis of their wrongly held beliefs and dogma.

Also, this does not establish that humans are superior to all machines. Machines that do not bear a simple relation to the machine described in Q will be able to answer the question correctly.

I think Turing’s rebuttal of the mathematical objection is sufficient. Machines not being able to answer Q can simply be viewed as a machine’s implicit bias, a tendency that humans also suffer from.

P: Machines answer Q wrongly on account of their relation to the machine described in Q.
P: Humans answer R wrongly on account of their biases, beliefs, and dogma.
P: Both machines and humans have biases in their thinking.
P: Parallels can be drawn with machines answering Q and humans answering R.
C: It is not possible to state that machines think differently to humans on account of the former’s inability to answer Q.
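The mathematical objection is Turing's rendering of undecidability results such as Gödel's theorem and the halting problem. The "simple relation" to the machine described in Q can be sketched by diagonalization: given any predictor machine, one can construct a program about which the predictor must answer wrongly. A toy illustration (the names and framing are mine, not Turing's):

```python
def make_contrarian(predictor):
    """Given any `predictor` that claims to forecast whether a program
    will answer 'Yes', build a program the predictor must get wrong."""
    def contrarian():
        # Do the opposite of whatever the predictor forecasts about us.
        return "No" if predictor(contrarian) == "Yes" else "Yes"
    return contrarian

# A toy predictor that always forecasts "Yes":
prog = make_contrarian(lambda p: "Yes")
# The predictor forecast "Yes", so prog answers "No" — the forecast
# is wrong, whatever fixed rule the predictor uses.
```

No matter how the predictor is defined, the contrarian program bears exactly the "simple relation" to it that guarantees a wrong or unforthcoming answer, which is the formal core of premise P1.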

The Argument from Consciousness

P1: Machines do not have feelings, emotions, or consciousness.
P2: An entity which does not have feelings, emotions, or consciousness cannot think.
C: Therefore, machines cannot think.

Turing dismisses this argument as solipsistic. If it were taken seriously, he says, the only way to know that a machine thinks would be to be that machine and feel oneself thinking. Likewise, people who hold this belief would have to concede that the only way to know that a particular man thinks is to be that particular man.

Unlike Turing’s refutations of the previous three objections, I believe his refutation here is not as potent. Being skeptical about a machine’s ability to think does not automatically make me a solipsist. Furthermore, skepticism about a machine’s ability to think does not naturally extend to skepticism about another person’s ability to think.

It would not be a reckless assumption to postulate that humans find it easier to believe that other humans think than that machines think. Philosophy and the related sciences are yet to give a complete picture of how humans think. This leads us to question how one can build something one does not fully understand in the first place.

Consider a black box that gives us an output for every input, and suppose its inner workings are hidden from us. In other words, it is not possible to know the output of the black box for a given input without actually passing the input into the box and recording the result.

Now, if we were asked to construct this black box, the most reasonable approach would be to take a list of all recorded inputs and outputs and construct a box that reproduces this mapping. However, doing so does not guarantee that the original black box and our constructed box will give the same output for unobserved inputs. It would therefore be premature to declare the two entities the same.
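This black-box worry can be made concrete with a toy sketch: a replica built only from recorded input/output pairs can match every observation yet diverge on a novel input. The functions below are invented purely for illustration.

```python
def hidden_box(x):
    # The "black box": behaviour we cannot inspect, only sample.
    return x * x if x < 10 else x + 90

# Build a replica from recorded input/output pairs alone.
observed = {x: hidden_box(x) for x in range(10)}  # only inputs 0..9 seen

def replica(x):
    # Reproduces every recorded mapping; guesses a pattern (x*x)
    # for anything beyond the observations.
    return observed[x] if x in observed else x * x

# The replica matches the box on every observed input...
all(replica(x) == hidden_box(x) for x in range(10))  # → True
# ...yet diverges on a novel input:
# replica(12) → 144, while hidden_box(12) → 102.
```

The replica is indistinguishable from the original on the recorded data, and the disagreement only surfaces when a genuinely new stimulus arrives, which is exactly the situation the essay describes for a machine imitating the mind.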

The same is true of the human mind and a machine. It might be possible for a machine to replicate some of the processes and outputs of the human mind, but it is likely to fail (i.e., differ from the mind) in the face of a novel, unseen stimulus. Therefore, machines may never be able to think like humans until and unless we completely understand how humans think.

Just like the theological objection, however, this argument may not stand the test of time. Once we do completely understand how the mind works, we may be able to construct it. The mind would no longer be a black box. In such a scenario, P1 would be rendered false and the argument would become moot.

The Disabilities Argument

The disabilities argument is merely a disguised form of the consciousness argument. Therefore, the argument and Turing’s objection will be treated very briefly in this section.

P1: Machines cannot do a cognitive activity X (fall in love, feel angry, make mistakes, etc).
P2: Humans can do a cognitive activity X.
C: Machines cannot think the same way as humans do.

Premise P1 is rooted in the human bias of induction: since a human has never seen a machine do X, s/he naturally assumes that no machine can do X. However, Turing argues that a machine can be programmed to do X, given sufficient storage. The criticism that machines lack diversity of behavior can therefore be rectified by giving them more storage and programming them to do more things.

There is however one important caveat here that was explained in the previous section. A machine can be programmed to do X only if we completely understand the working of X. If X is a black box, it might not be possible to build an exact replica of the way humans think.

As always, this state of partial understanding is not immortal and is extremely prone to change. Once we understand the entire process of how we fall in love and feel angry, it is not far fetched to postulate that we can program machines to do the same too.

Lady Lovelace’s Objection

P1: Machines lack originality. They can never do anything really new.
P2: Humans have originality and can do new things.
C: Machines can’t think the same way as humans do.

Lady Ada Lovelace, the world’s first programmer.

Turing has an extremely potent counterargument to this: the premise P2 is false. Humans having originality and producing novel things is merely an illusion, for everything that humans have created or discovered is based on previous work. In other words, there is no human creation that is completely original and devoid of any dependence on previous human invention, knowledge, or thought.

A variant of Lovelace’s objection is as follows.

P: Machines can’t take us by surprise.
P: Humans can take us by surprise.
C: Machines do not think the same way as humans do.

Here, Turing argues that machines have taken him by surprise and therefore, the first premise must be necessarily false. However, he is vigilant in acknowledging the shortcomings of his argument when he says that the surprise might be rooted in the creative mental act on his part and not of the machine.

I think the second variant of Lovelace’s objection is particularly effective and cannot be quelled by the arguments Turing provides. The potency of the argument lies in the fact that there are facets of human thinking and behavior that are random (or appear random). A deterministic computer, on the other hand, can produce at best pseudo-randomness: sequences that look random but are fully determined by the program and its starting state. Therefore, it is difficult for a computer to take its creator or programmer by surprise, and by extension, anyone who knows how the computer is programmed is unlikely to be taken by surprise.
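The point about pseudo-randomness can be illustrated with a standard pseudo-random generator: its output looks random, but anyone who knows the program and the seed can replay it exactly, so the "surprise" disappears.

```python
import random

# A deterministic machine producing "random" outputs from a known seed.
machine = random.Random(42)
outputs = [machine.randint(0, 99) for _ in range(5)]

# Anyone who knows the program and the seed replays the sequence
# exactly, so the machine's "surprises" are predictable in principle.
replay = random.Random(42)
predicted = [replay.randint(0, 99) for _ in range(5)]
assert outputs == predicted
```

Whether this settles the objection is another matter: a programmer who does not bother to trace the computation may still be surprised in practice, which is precisely the concession Turing makes.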

Argument from continuity in the Nervous System

P1: The nervous system is a continuous state machine.
P2: Digital systems are discrete state machines.
P3: Discrete-state machines work differently from continuous-state machines.
C: No digital computer would be able to pass the Turing test.

Turing admits that the premises are true, but he argues that the argument is not valid. That is, it is possible for a computer to pass the Turing test even though it is inherently a different kind of machine from the nervous system.

Turing refutes the argument by giving the example of a differential analyzer, which is a continuous state machine. He goes on to show that it is possible that P1, P2, and P3 are true but C is false for a differential analyzer and a digital computer, thus rendering the argument invalid.

P: The differential analyzer is a continuous state machine.
P: The digital computer is a discrete state machine.
P: An interrogator will find it difficult to distinguish the two.
C: It is possible that the interrogator will find it difficult to distinguish another set of machines, namely the mind (continuous) and the digital computer (discrete).
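Turing's own illustration makes the point concrete: asked for the value of π, a differential analyzer returns roughly 3.1416 with some small analogue error, while a digital computer can choose at random among nearby values with suitable probabilities. The values and weights below are the ones Turing suggests in the paper; the simulation framing is mine.

```python
import random

def differential_analyzer_pi(rng):
    # Continuous machine: answers with a small analogue error around pi.
    return 3.1416 + rng.uniform(-0.005, 0.005)

def digital_computer_pi(rng):
    # Turing's reply: a discrete machine picks among nearby values with
    # chosen probabilities (values and weights from Turing's example).
    values = [3.12, 3.13, 3.14, 3.15, 3.16]
    weights = [0.05, 0.15, 0.55, 0.19, 0.06]
    return rng.choices(values, weights=weights)[0]

rng = random.Random(0)
# Both machines print answers close to pi; an interrogator who sees
# only the answers cannot tell continuous from discrete apparatus.
answer = digital_computer_pi(rng)
```

Seen this way, the discreteness of the computer is invisible at the level of conversation, which is the only level the imitation game tests.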

Extrasensory perception

Let us suppose that extrasensory perceptions such as telepathy and clairvoyance exist.

P1: The interrogator can communicate telepathically with the human.
P2: The interrogator cannot communicate telepathically with the machine.
P3: In a guessing game, the human will provide better answers than the machine, on account of receiving answers through telepathy.
P4: The interrogator will be able to distinguish the man from machine.
C: No machine will pass the Turing test.

This argument is perhaps the weakest of all. It fails if it can be proved beyond doubt that the extrasensory perceptions in question do not exist. Additionally, it may be possible for the machine itself to pick up telepathic signals from the interrogator and thus no longer be at a disadvantage.

To overcome this objection, Turing gives a very simple solution: the test must be conducted in a place/room that is telepathy proof. Adding this extra condition to the test renders this entire objection moot.

Conclusion

For the sake of brevity, this paper has not covered the Informality of Behavior argument as it has already been addressed, at least in part, in the other sections. Some of the objections stem from the objector’s perception of what thinking means (and therefore, the validity of the test) whereas others argue that under certain conditions, the test can never be passed.

In most cases, Turing does an excellent job of refuting the objections. However, Turing’s test and the ability of machines to think will be viewed with skepticism until we obtain a complete understanding of the workings of the human mind. Furthermore, there is inherent randomness (or perceived randomness) in human behavior that machines might not be able to replicate completely. Therefore, the ability of machines thinking, contrary to what Turing might have believed, still remains an open question.

Bibliography

  1. Turing, A. (1988). Computing Machinery and Intelligence. Readings in Cognitive Science, 6-19. doi:10.1016/b978-1-4832-1446-7.50006-6
  2. Fermi Paradox. (n.d.). SpringerReference. doi:10.1007/springerreference_222373
