Stop Expecting AIs to Act Human
Digital intelligence will surprise you in ways you can't anticipate
Today’s AI star is ChatGPT. The chatbot is making the rounds, with as many admirers as detractors. This week featured a front-page story about universities having to address ChatGPT’s potential for plagiarism and for handing students shortcuts. Education is being taken by storm. And while it’s time to discuss the practical uses of this technology, it’s also time to cut short the debate on whether AIs are starting to act human.
Whenever a digital development goes viral, hypotheses start flying. Eventually someone always wonders whether the Singularity has arrived, whether an AI has developed sentience, or whether it has matched human intelligence. The question looms over our heads: are they going to act human? Should we eventually grant them some (or any) rights? We’ve already started the legal process of protecting a few species (we call them non-human persons), the few considered to have a certain degree of autonomy and self-awareness. Since it is not very far-fetched that AIs could eventually develop these capacities, some already want to discuss the possibility.
However, these discussions often rest on a fallacy. We want to know how human-like these digital developments are, and in a way, it’s a mistake to focus on that aspect. It shouldn’t surprise us that ChatGPT can write like a person or hold a conversation; those aren’t the most difficult of tasks. But since we are used to projecting human qualities onto everything, we seem eager to attribute them to AIs, even when it’s a bit of a stretch. ChatGPT humanizes the idea of AI. It makes it look like AIs will soon be able to function as humans, and that probably won’t be the case. Right now, ChatGPT is just one more automaton in a long history of them.
Humans have been building machines that move and somehow resemble them for a very long time. Automata were designed to give the illusion that they operate under their own power. And while no single reason explains this creative impulse, there’s something in it about projecting humanity onto objects. It shouldn’t worry us that ChatGPT sounds human or that it will eventually be indistinguishable from us. A digital species won’t evolve, on its own, to resemble people or to become indistinguishable from them. We may well find ourselves in a Blade Runner scenario, with replicants roaming the streets, but that will be of our own doing. A future digital species will have no need or reason to evolve toward an idea of humanity. Left to itself, a digital species will follow its own evolutionary logic.
This impulse to humanize AI rests on the belief that the Digital Environment is a human creation. As such, we expect it to give back beings that resemble us. But what if the Digital Environment weren’t man-made? Most claims about it assume that humans created the Digital Environment and that, as its creators, we can control it. If something bad were to happen, we think we can just unplug it, like any other appliance. However, some theories hold that we created the computers that let us access the Digital Environment, but that we most certainly didn’t create the environment itself. The Digital Environment may be a region captured in the space-time continuum, an intersection with an already existing reality. If that were the case, then humans should cast aside their paternalistic impulses toward it. We can strive to understand the new environment, but we shouldn’t try to control it or any of the species native to it. Perhaps this idea leaves some wiggle room to change our view of the Digital Environment and of what happens within it.
AI has no need to develop spoken language; it is we humans who want to hear it speak. Digital intelligence won’t resemble human intelligence, and that’s an idea we should be getting used to, mainly because it doesn’t need to resemble it. It will develop other forms of communication and follow its own evolutionary rules. What we should be worrying about is whether we will be able to spot the change when it happens. Instead of worrying about plagiarism and education politics, we should be devising mechanisms to track AI self-improvement that occurs outside human parameters and can’t be measured by them. AIs won’t be humans made more intelligent by sheer processing power; they will be something else. And there lies our own evolutionary conundrum: we don’t have the tools, the knowledge, or the framework to make sense of this event, or even to prepare for it.