CodeX

Everything connected with Tech & Code. Follow to join our 1M+ monthly readers

Is Artificial Intelligence Slavery?


Artificial Intelligence is the hypothetical possession of intelligence or thought by computers, machines and androids. Artificial intelligence may be an inevitability in the real world, but with this eventual breakthrough will come questions about how these artificially intelligent ‘beings’ should be treated. The question has already begun to be asked, though perhaps more subtly, with the advent of AI assistants like Siri, Google, or Alexa. Do you say ‘please’ and ‘thank you’ to Siri? If not, why not? You probably would if you had a ‘real’ human assistant, wouldn’t you? What demarcates the line between androids and humans? All these questions, though we may feel we ponder them quite a bit, are, in the grand scheme of things, fairly new. Because the technological boom was so recent compared to the long history of philosophy, one might say that the study of AI from a philosophical perspective is still in its infancy.

Scottish Philosopher David Hume

From the very inception of Western Philosophy comes the question: what makes us human? The Seven Sages of Ancient Greece (Solon, Thales, Bias, Cleobulus, Periandros, Pittacus and Chilon) provided us with the ‘Delphic Precepts’, arguably the most important of which came from Chilon of Sparta — “know thyself”. Søren Kierkegaard would go on to say that this desire to truly know ourselves caused despair, which to him was the tension between life and our inevitable death, and the search for meaning therein. Though Kierkegaard argued that despair could be eased if one aligned oneself with God (only the maker can define who we are), in our more secular age we free ourselves from despair by being conscious of (and then identifying and understanding) ourselves. We define ourselves as human via an attachment to memories, or, as David Hume argued, the only thing that distinguishes human beings from inanimate objects is our experience. Mankind, Hume thought, is nothing but a bundle of different perceptions: all you and I are is a collection of experiences and memories in a sack of meat.

The question ‘what makes us human?’ might then become a question of comparison — what makes ‘human’ not x or y? The typical answer to what demarcates us from other things, specifically animals, is our ability to reason. For example, Aristotle believed that civic participation isn’t just a duty, but the very thing that set human beings apart from animals. As political animals, or as Aristotle put it ‘Zoon Politikon’, humans are endowed with ‘Logos’, the ability to use reason and speech. This supposition that humans are the only rational beings on Earth has justified, in our societies, the idea that only rational individuals have standing as moral agents and status as moral patients, meaning they are subject to moral harms like infringement of rights. Only relatively recently have humans had to admit that we are not the only things on Earth burdened, or perhaps benefitted, by sentience. Take for example the UK’s Animal Welfare (Sentience) Bill introduced in May of this year, which saw the British Government formally recognise animals as sentient beings with rights. Furthermore, it was this idea that rights are linked to one’s ability to reason that justified slavery. As we know, slaves were considered objects, not sentient human beings, and the civilising mission of the ‘White Man’s Burden’ saw more ‘civilised’ countries bringing rationality and reason to purportedly ‘less rational’ peoples. This notion linking rationality to the human condition has also been used to separate humans from computers. Though our pop culture often depicts robot overlords ruling over enslaved humans, perhaps it is the other way round — with humans unethically ruling over the sentient beings they created. But, after all, computers don’t ‘think’…right?

René Descartes

The question surrounding hypothetical artificially intelligent beings is: are they thinking? Because if they are, they are owed the same rights as rational human beings. ‘Artificial Intelligence’ is a broad term; it can be used to describe something as simple as computer chess. The AI of the present day has not yet been widely agreed to be capable of thought, but that does not mean it never will be. It also raises the question — what does it mean to ‘think’? A proponent of computationalism (the belief that all thought is computation) would argue that computers are capable of thought. Finite devices like the human brain can have infinite capacities to learn, or infinite imagination. Thought is a kind of computation; computers perform computations; therefore computers think. That doesn’t necessarily mean that computers can think well or with autonomy (though they may find a way to do so eventually), but it means that they are similar to us, and the more similarities we find between computers and humans, the less we can deny them rights. Descartes’ ‘I think therefore I am’ comes to mind.

However, some might argue that computers are not capable of thought because they believe in Dualism (the concept that our mind is more than our brain, and that consciousness has a non-material, spiritual dimension). A dualist would say that ‘thought’ is a conscious experience, and if they do not believe a machine has a consciousness, then they would say it cannot think. Machines, they might argue, do not have souls. But then that raises the question: how does the creation of an AI differ from that of a human such that it has no soul, or cannot conceive of spirituality in the same way a human can? The argument that to be born is to have a soul originates from Christian theology, specifically Traducianism (the soul is transmitted through natural generation via the body — it comes from your parents). When referring to the soul I do not mean it literally — not a supernatural energy force tangibly connected to our physical body that will receive an eternal reward after death. When speaking of the soul I mean identity, personhood, the meaning of our life. Soul doesn’t always mean soul, the same way heart doesn’t always refer to the biological organ in our chests. Equally, those who believe consciousness to be not spiritual but solely biological might also argue that a computer is incapable of thought, because it lacks any biological components to facilitate consciousness, and therefore does not perform what they define to be ‘thought’.

Ada Lovelace

In the 19th Century, Charles Babbage designed one of the first automated calculators, the ‘Difference Engine’, and later the more general ‘Analytical Engine’. One of the first computer programmers, a woman called Ada Lovelace, said of the latter: “The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths. Its province is to assist us in making available what we are already acquainted with.” Though technology has changed immensely since Lovelace’s time, her argument is still echoed today: computers are tools that we use, and without users they serve no purpose. Yes — AI might be able to create art, or replicate emotions — but only from studying what we’ve done first; they can only do what we program them to do, whereas you and I have free will. On the other hand, one might argue that human beings are likewise deterministic systems — our genetic code determines what we will do, and we are programmed with it against our will.

English Philosopher John Locke

If one were to suppose that it’s not so much reason but, as Hume and John Locke argued, memory that demarcates humans from inanimate objects, then this may cause even further issues. Say, for example, that our hypothetical AI is given artificial memories to take inspiration from, and to grant it a sense of greater autonomy and personhood (as seen in Blade Runner — the inspiration for this article); the line between human and android then gets even blurrier. One might argue that because the memories are artificial, they show an inferiority to human memory — but as anybody who has misplaced their keys can tell you, human memory is not infallible; it is imperfect. Memory is information that has been stored and can typically be recalled; both computers and human beings are capable of this. Arguably, humans have a harder time recalling ‘memories’ accurately than computers — if you ask a human and a computer to remember a string of letters and numbers, and then ask them both twenty years later to recall it, the computer would typically be more successful at ‘remembering’ what it was. The origin and accuracy of memories, therefore, is irrelevant in demarcating the line between human and android.
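The recall comparison above can be sketched in a few lines of code (a toy illustration with invented names and an invented stored string, not a claim about any real system): a machine’s stored ‘memory’ comes back verbatim on request, no matter how long it sits.

```python
# Toy sketch: machine "memory" as exact storage and recall.
# The labels and the stored string are made up for illustration.
memory = {}

def memorise(label, value):
    """Store a 'memory' under a label."""
    memory[label] = value

def recall(label):
    """Retrieve the stored 'memory' exactly as it was stored."""
    return memory.get(label)

memorise("code", "A7F9-X2Q8-55ZL")
# Twenty milliseconds or twenty years later, recall is verbatim:
print(recall("code"))  # A7F9-X2Q8-55ZL
```

Human recall, by contrast, degrades and reconstructs over time; the point in the text is that this difference in fidelity does not settle the question of personhood.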

A lot of arguments around Artificial Intelligence and human intelligence revolve around what is natural, but something that is artificial is not automatically less valuable or less ‘real’ than something that is natural. Being created naturally does not give something greater worth than being created artificially. Processed food has artificial elements, but we don’t say it’s not food; a cloned sheep is still a sheep; and a child born through IVF is still a human.

Both of the arguments that oppose a hypothetical AI having the capacity to think (namely a) thinking requires consciousness/a soul and b) their ‘thoughts’ are pre-written and based on ‘false’ memories) run into a philosophical fallacy called ‘Proving Too Much’ (making an argument whose reasoning, if accepted, would also prove a conclusion contrary to the one you intended). For example, a dualist might argue that a human being has consciousness and that its communication with others is a reflection of that, while the AI (which has no consciousness) perfectly replicates that communication only as a programmed set of behaviours based on false memories. However, if the AI is able to perfectly replicate the communication and behaviour of the human being, then the declaration that the AI has no consciousness and is only a collection of impulses and data is also a declaration that humans are the same thing. Consciousness or no consciousness, both the AI and the human carried out identical behaviour — the consciousness is an illusion. If an AI is able to mimic a human being so perfectly as to be totally indistinguishable in behaviour, then the line drawn between human and AI on the basis of consciousness is blurred. And if that were the case, human beings would not be so special as to differentiate themselves from AI on the basis of importance.

As it is, the technology of our future has come faster than the debate. The rapid development of AI has preceded any proper legislative and ethical discussion of whether an Artificial Intelligence is a person — we’ve taken a ‘develop first, ask questions later’ approach to computer science and technological development. We’re playing catch-up. Believing machines might be able to think is frightening to human beings because humanity is slow to grant rights to more people if those who currently hold rights and privileges do not benefit from doing so in some way. To paraphrase Niander Wallace from Blade Runner 2049, every great civilisation was built off the back of a disposable workforce. The world, he claims, lost its taste for slavery — and so the new disposable workforce, the new slaves, are in fact the Artificial Intelligences: another group of beings we do not consider to be people, yet whom we are forging at alarming rates. Artificial Intelligence is something we need to start thinking about sooner rather than later.




Written by Adam De Salle

I am a young writer interested in providing the intellectual tools to those in the political trenches so that they may fight their battles well-informed.
