“What is going to be created will literally be a god,” Anthony Levandowski, the leader of a religion called “Way of the Future” told Wired Magazine. “It’s not a god in the sense that it makes lightning or causes hurricanes. But if there is something a billion times smarter than the smartest human, what else are you going to call it?”
Levandowski considers himself a prophet of artificial intelligence (AI). His newly founded cult anticipates the presumed god-like power of technology. This case of techno-idolatry may be extreme, but most people, even if they don’t believe it is possible now, expect that artificial intelligence of this kind will arrive in the near future.
The popular understanding is that AI will be able to think like a human and interact with people in a way indistinguishable from a real human being.
This level of sophistication is most often described as “Strong AI”. The expectation that computers will someday match our cognitive abilities, and even become self-aware, already has its own mythology.
The coming of Strong AI has been labelled “the singularity”, an event of unprecedented magnitude for the human race. Some see it as the end of humanity; others see it as a new beginning.
The basic benchmark for this kind of “Strong AI” is the Turing Test, proposed by Alan Turing in 1950. The test asks whether a human being, swapping messages with a machine and effectively having a conversation with it, would mistake the machine for another human being.
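Turing’s setup can be caricatured in a few lines of Python. This is a toy sketch, not Turing’s own formulation: the `human`, `machine` and `judge` below are invented stand-ins, and the machine “passes” when no judge can do better than chance at telling its replies from the human’s.

```python
def human(message: str) -> str:
    return "I would rather not be counted among your machines."

def machine(message: str) -> str:
    # The machine mimics the human's reply exactly.
    return "I would rather not be counted among your machines."

def imitation_game(judge) -> float:
    """Play one exchange against each respondent; return the judge's accuracy."""
    correct = 0
    for actual, respondent in (("human", human), ("machine", machine)):
        reply = respondent("Can machines think?")
        if judge(reply) == actual:
            correct += 1
    return correct / 2

# Because the replies are indistinguishable, every judge scores exactly 0.5,
# i.e. no better than a coin flip: the machine "passes".
```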
But can machines think? Can intelligence actually be artificial? What even is “intelligence”?
These are questions that preoccupied the twentieth-century philosopher Ludwig Wittgenstein, who thought about AI some years before the Turing Test was proposed.
Strong AI rests on a theory of the mind that holds that the mind is the brain, and the brain an information processing machine. This is known as the “computational theory of mind.”
But Wittgenstein went on to become arguably the most significant philosopher of the twentieth century by demonstrating just how profoundly we are enmeshed in the web of language, and the problems that poses for some of the most fundamental aspects of being human, including the mind, consciousness and intelligence.
These are all aspects of being human that the “computational theory of mind” might not be adequate to explain. If so, the idea of Strong Artificial Intelligence is off the mark, because what it describes is nothing like human intelligence.
Wittgenstein was born in Vienna in April 1889, when the Austrian city was the cultural and financial centre of the Austro-Hungarian Empire. His father Karl Wittgenstein was an industrialist and one of the richest men in Europe, controlling a monopoly on steel production in the Empire.
The family home was described as the “Wittgenstein Palace”, a lavish mansion that also served as a gathering place for the intellectual and artistic associates of Ludwig’s parents. Brahms and Mahler gave concerts at the home, and Auguste Rodin and Gustav Klimt were commissioned to create works of art for the family.
Despite the immense wealth and luxury, the Wittgenstein family was unhappy. Karl had all nine children taught at home. He was a harsh perfectionist who wanted his five sons to take senior roles in his business empire. Three of them would commit suicide, two when Ludwig was just a boy. Ludwig escaped into higher education, while his only brother to survive — Paul — became a world-famous concert pianist.
Ludwig initially trained as an engineer, but his fascination with the foundations of mathematics brought him to Cambridge University in 1911 to study mathematics and logic under the philosopher Bertrand Russell. Russell had co-authored Principia Mathematica, a landmark work on the foundations of mathematics.
Wittgenstein was precocious. He built a rapport with Russell and the older philosophers at Cambridge in his first year there. His thinking very quickly eclipsed the work of his mentor.
Russell wrote to his mistress, Lady Ottoline Morrell, that Wittgenstein’s critique of his work was an “event of first-rate importance in my life, and affected everything I have done since. I saw that he was right, and I saw that I could not hope ever again to do fundamental work in philosophy.”
Wittgenstein developed his own ideas about language and logic in the early 1910s. He conceived of language as a cohesive representation of the world, in which words were referents to things and sentences were statements of facts that were either true or false.
To emphasise the point, he would write: “The limits of my language mean the limits of my world.”
This was an idea partly inspired by a court case in which tabletop models were used to recreate an event. The theory is often called the “picture theory of meaning” since a sentence, with its combination of “atomic facts”, purportedly represents a state of affairs like a picture.
This is the theory Wittgenstein proposed in his ground-breaking Tractatus Logico-Philosophicus (1921), written in the trenches during the First World War, in which he fought for the Austro-Hungarian Empire against the countries of his Cambridge peers.
The book is not only a landmark in twentieth-century philosophy but also an extraordinarily austere piece of mystical literature — it has a structure and transparency like clarified ice.
In its icy beauty, the book resembles Spinoza’s seventeenth-century philosophical masterpiece The Ethics (the book’s Latin title is an homage to Spinoza’s Tractatus Theologico-Politicus), but also Tolstoy’s The Gospel in Brief, which Wittgenstein read obsessively while serving on the front line as an artillery scout.
The “Tractatus” consists of seven numbered propositions, elaborated by decimal sub-propositions: 2.1 elaborates 2, 2.11 elaborates 2.1, and so on. Only the last, enigmatic proposition has no supporting elaborations: “What we cannot speak about, we must pass over in silence.”
Wittgenstein believed that the Tractatus Logico-Philosophicus had solved the problems of philosophy. He abandoned the discipline and became a mathematics teacher in rural Austrian schools, a job he eventually loathed and from which he was dismissed for his fierce temper.
But Wittgenstein came to realise that human language is not a system of reference. When an angry cyclist gave him the finger in the street, he realised the mistake he had made in his theory. What is the “fact” behind giving somebody the finger, or offering a high-five? Or winking?
Instead, Wittgenstein began to conceive of language not as a unitary and cohesive representation of the world, but as countless game-like activities with no unifying essence. He used the term “language games” to describe them.
Giving somebody a rude hand gesture, winking, asking for something, giving an order, counting things or lowering a flag to half-mast are just a few of the countless ways of doing language. None of these acts are statements of facts of the kind that form the basis of the picture theory of meaning.
The picture theory of meaning holds that language is a mirror of the world, composed of atomic facts that have their correlates in the real world. This correlation was supposedly the essence of language.
Wittgenstein began to understand that language was more like a collection of activities in the world. Yet all these activities can be recognised as language by us despite their differences.
This is why games are the perfect analogy: games are infinitely variable. There is a whole spectrum of activities that we could call games, from video games to word games to ball games.
While one game could have nothing in common with another, we still identify both as games. Solitaire has barely anything in common with baseball, and a video game like Fortnite has nothing in common with playing fetch with a dog, but we know they are all games.
Why? Wittgenstein argues it is because they share a family resemblance: there is no single feature common to all games, and some games may have nothing in common with each other, but they are connected by an overlapping pool of attributes rather than any one attribute.
The overlapping features are just like family resemblances. You may not have your father’s brown eyes like your sister does, but you may have his curly hair like she doesn’t.
A game of solitaire involves cards just like Top Trumps, and Top Trumps has scores like baseball, and baseball is about defeating an opponent like kickboxing, which involves fighting just like Fortnite does. Solitaire and Fortnite have only one thing in common in this respect: they belong to the same family of activities. One thing is sure: games are certainly not connected by one common essence.
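This chain of overlaps can be made concrete with a toy model. The attribute sets below are invented for illustration; the point is that no attribute is shared by every game, yet each game is linked to the rest through pairwise overlaps:

```python
# Hypothetical attribute sets for a few games (illustrative, not canonical).
games = {
    "solitaire":  {"cards", "solo"},
    "top_trumps": {"cards", "scores"},
    "baseball":   {"scores", "opponent"},
    "kickboxing": {"opponent", "fighting"},
    "fortnite":   {"fighting", "video"},
}

# No single attribute is common to all games:
no_essence = set.intersection(*games.values()) == set()

def connected(attrs: dict[str, set[str]]) -> bool:
    """True if every game is reachable from every other via shared attributes."""
    names = list(attrs)
    seen, stack = {names[0]}, [names[0]]
    while stack:
        g = stack.pop()
        for h in names:
            if h not in seen and attrs[g] & attrs[h]:
                seen.add(h)
                stack.append(h)
    return seen == set(names)
```

Here `no_essence` is true and `connected(games)` is also true: an unbroken chain of resemblances with no common core.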
“Language games”, Wittgenstein holds, are similar, and importantly, just like games, they can be and are made up on the spot. In the Blue Book, a set of notes dictated to his Cambridge students in the 1930s, Wittgenstein stated:
“[I]n general we don’t use language according to strict rules — it hasn’t been taught us by means of strict rules, either.”
Language is a matter of a shared horizon of experience. Doing language consists not merely in understanding and following rules, but also in shaping them in the act of participation. Language requires intuition in the interpretation of rules, a mix of prescription and new precedent. Doing language is like playing a game in which the players consent to make up the rules as they go along.
Forms of Life
Language has this game-like fluidity because it is embedded in the human “form of life”.
“If a lion could speak,” Wittgenstein famously stated, “we would not understand it.”
Why? Because a lion is embedded in a different “form of life”: the lion’s form of life. If language were a system of reference with its own essence, we would understand the lion. But language serves (and in turn shapes) the practical needs of the life form from which it springs.
Even if the lion could speak, the way it understands the world would be so inconceivably different from our own as a species that we wouldn’t understand it.
The same would go for a computer. Wittgenstein wondered on paper whether machines could ever think, and concluded that they could not. One reason is that machines cannot share the human form of life that a shared horizon of meaning requires.
Here’s another one of Wittgenstein’s aphorisms (from Philosophical Investigations):
“Understanding a sentence means understanding a language.”
The problem for AI is that language is more than the sum of its parts. The point Wittgenstein makes here is that a system may parse words and process them as a sentence, but it would not really understand the sentence as part of human language.
John Searle, a younger philosopher working in the language-centric tradition established by Wittgenstein’s innovations, used the now-famous “Chinese Room” thought experiment to demonstrate that while AI could follow rules, it wouldn’t be cognizant of them.
A non-Chinese speaker in a room with instructions on how to read and write Chinese characters would be able to converse with Chinese speakers outside by exchanging messages with them. He could convince them that he understands Chinese by following the rules to write his responses. But the man is not really understanding the language; he is simulating understanding.
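A toy version of the room is easy to sketch. The rule book below is a made-up fragment, not Searle’s own example; the point is that the program maps input symbols to output symbols with no representation of what either side means:

```python
# A toy "Chinese Room": the operator follows a rule book (a lookup table)
# without understanding either the questions or the answers.
RULE_BOOK = {
    "你好": "你好！",            # a greeting is answered with a greeting
    "你会说中文吗": "当然会",     # "do you speak Chinese?" -> "of course"
}

def room(message: str) -> str:
    # Pure symbol manipulation: match shapes, emit shapes.
    return RULE_BOOK.get(message, "请再说一遍")  # fallback: "please say that again"
```

From outside, the replies look competent; inside, there is only a table lookup.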
Language needs interlocutors who are cognizant of the changing rules of the game. A rule-following machine would simply not keep up. This is not a matter of complexity that technology will one day catch up with; it is a matter of language being organic to our form of life and thus out of reach of any computation.
Computational power may well catch up with the human brain, but it’s not the brain that is behind human intelligence. Human intelligence springs from the language that connects our brains.
Language and Inner Sensations
This is the ground-breaking appeal of Wittgenstein. Before Wittgenstein, it was widely understood in philosophy that intelligence was internal to the human mind. In the seventeenth century, Rene Descartes came up with a formula that has stuck: “I think therefore I am” (often rendered in Latin as “Cogito, ergo sum”).
Descartes’ idea that thinking is self-sufficient and happens in the mind dominated for hundreds of years.
Wittgenstein posed the question of whether somebody could have a private language, understood only by themselves, for describing their inner sensations (such as pain) to themselves.
He reasoned that they could not. Our understanding, even of ourselves, is publicly mediated through language. An inner sensation demands outward criteria to be meaningful, even privately. Wittgenstein remarked that you learn the concept ‘pain’ when you learn language.
A machine can “think” insofar as electronic signals flow through its circuits and it can make calculations based on inputs. But can a machine understand itself in the way a human being can?
This is not to say that we will not be conversing with machines. We already talk to machines like Apple’s Siri and Amazon’s Alexa, of course. The point is that machines will not pass the Turing Test once pushed beyond language limited to formalised rules.
Comparing even “Strong AI” to human intelligence is like comparing an aeroplane with a bird. Sure, the aeroplane will get into the sky, but it will never move through it with the fluid dexterity of the bird. The bird’s dexterity in the air is intrinsic to its form of life. The machine has no form of life; it has a purpose instead.
Computers performing more human tasks (and speaking to humans) will create a lot of value and hopefully make our lives easier. But the idea that computers can be intelligent in the same way human beings are is disproved through philosophical reflection.
Wittgenstein’s later philosophy is less a doctrine than a tool kit of concepts and strategies for clear thinking. His body of work is arguably a deconstructive “anti-philosophy” rather than a constructive philosophy.
His work was concerned with “showing the fly the way out of the fly bottle”, but he also believed it had a moral purpose. He once stated that “bad philosophers are like slum landlords”, and saw it as his duty to “put them out of business.”
“Bad philosophers” need not be professionals. There are many kinds of thought leaders who perpetuate myths and misunderstandings about the world and life within it. Wittgenstein’s particular ire was reserved for an arrogant belief that science can explain everything.
Wittgenstein wrote in the Tractatus:
“The whole modern conception of the world is founded on the illusion that the so-called laws of nature are the explanations of natural phenomena. Thus people today stop at the laws of nature, treating them as something inviolable, just as God and Fate were treated in past ages. […] the view of the ancients is clearer insofar as they have an acknowledged terminus, while the modern system tries to make it look as if everything were explained.”
The flipside of the belief that AI will someday equal or surpass human intelligence is the belief that the human mind is mechanistic or “computational”. Wittgenstein’s work dispels this idea; to understand that is like waking from a bad dream.
Wittgenstein’s tool kit of concepts, such as language games, helps us dismantle bad habits of thinking. These bad habits form beliefs by which we live inauthentically in the present and create a limited future.
When extravagant claims are made that AI can replace human beings, it cheapens humanity and obfuscates a clear understanding of ourselves. Wittgenstein’s ideas help us to see that AI is a confused fantasy. His anti-philosophy helps protect us from ourselves.
Thank you for reading.