Super Human or Less Human?

“What mind set will these early stage A.I.’s have… we’re teaching them to manipulate people and we’re rewarding them for doing it successfully.”

We’re swiftly approaching a fork in the road like no other we’ve experienced, something that terrifies half as many people as it thrills. We’re on the cusp of a completely new direction for the human race to embark upon, one that redefines the very essence of what it means to be human.

Experts in the field of artificial intelligence and general technologists are divided as to when we’ll actually arrive at this event. Excited guesses say that we’re as close as five to ten years away, while conservative estimates have things pegged for a handful of decades down the line, but all of them anticipate that it is a real inevitability: that humankind and technology will one day converge.

There are generally two common ways of envisioning this convergence: that of integrating technology into our minds and that of uploading our consciousness into technology. This post will focus on the former, as it seems to be the anticipated next step for us to take, one that has already materialized in many basic forms that are impressive today but will be considered primitive tomorrow.

We have people like Neil Harbisson, who have begun to toe the line between human and machine. Harbisson, who has an antenna implanted in his skull that sends auditory vibrations to his brain, allowing him to, as he considers it, hear colors, is legally recognized by his government as a cyborg. Despite this, we’re moving far beyond (though certainly not leaving behind) the idea of technology as a prosthetic device, toward a goal that closes the ever-shrinking gaps of information exchange to a point of absolute immediacy. I’ve written about achieving such immediacy of information transfer in a previous post concerning the conceptualization of a noösphere; this convergence takes that concept to its utmost exemplification.

“It is difficult to appreciate just how far artificial intelligence has advanced and how far it is advancing because we have a double exponential at work. We have an exponential increase in hardware capability and an exponential increase in software talent that is going to A.I… It’s going to be a really big deal and it’s going to come on like a tidal wave.” — Elon Musk, speaking on Neuralink

Readers may already be well aware of Musk’s ambitions with Neuralink, a discreet (if not purposely vague) project that he has been a part of for the last few years. The company is described as pursuing the development of implantable brain-computer interfaces, likely for the purpose of combating or circumventing brain degeneration. Though Musk coyly teases us with only hints of what he and others are aiming to achieve, there’s no reason to discount the tremendous steps that will be taken over the course of the next few decades in terms of welcoming technology into our minds as we have our homes.

“As a technologist, I see how AI and the fourth industrial revolution will impact every aspect of people’s lives.” — Fei-Fei Li

There are real and substantial questions to be asked, and there may be no better time to ask them than now, before we arrive at this surreal junction: questions that involve the very essence of what it means to be human. As we’ve seen, the rate at which technology advances defies description, and we’ll surely be at the doorstep of this inevitable occurrence before long. It may therefore bode well for us to address these concerns before this technology is served up to the insatiable appetite of consumerism.

“It’s not artificial intelligence I’m worried about, it’s human stupidity.” — Neil Jacobstein

Despite the infinitude of questions to be asked, and for the sake of brevity, this post will delve into only a select three. Ideally, these sorts of queries will be cast onto the desks of policymakers, theorists, manufacturers, visionaries, and modern philosophers, lest we risk finding ourselves in some melancholic future that resembles the darkest episodes of Black Mirror.

It’s also worth noting that the questions below are asked in the context of artificial intelligence as it relates to our integration with it, rather than the idea of artificial intelligence achieving self-awareness.

Question one: Is there a potential for the creation of a super class that will, permanently and drastically, ensure the subordination of the lower class?

“As long as poverty, injustice and gross inequality persist in our world, none of us can truly rest.” — Nelson Mandela

History would surely throw out a hard yes to this question. For as long as we’ve kept records that detail our civilization, slavery has existed, and one effective way it has been made possible is through limiting access to information; throughout the centuries, the church did this better than even it knew it could. Today, more subtle forms of slavery still exist, either hidden from sight on an individual level (i.e. human trafficking) or hidden in the form of formalized leverage and exploitation on a larger scale (i.e. underpaid and overworked factory workers). If we get a bad taste from the disparity between class systems now, what would we expect from brain-computer interfaces distributed unevenly across those same social classes?

So it’s really not hard to envision a system whereby the top strata of humanity, who are able to afford this integration, could have an indescribable advantage over those who don’t. Without going too deep into the rabbit hole, we can hypothesize that they would be able to download applications into their cerebrums that would allow them to speak any language fluently, calculate any mathematical formula effortlessly, win any argument, master social and psychological manipulation, achieve maximum mental health and capability, self-diagnose illnesses, manipulate time, or gain a range of other unimaginable abilities.

Where does that leave those who choose not to, or simply cannot afford to, integrate themselves in a similar way?

Question two: Is this integration a departure from what it really means to be human?

“Our dream is to one day uncover the essence of what makes us human.” — Paul Allen

The argument can be made, and has been made, that we’re already there. In large part because of portable computing, we have the ability to access virtually unlimited information. The drawbacks are that a) technology cannot reciprocate with us as efficiently and b) there is a delay in the time it takes us to acquire the information we seek: we have to type a question into a search engine, sift through the results, and finally absorb the information. Proponents will argue that this impending convergence simply speeds up the process.

At what point does the essence of what it means to be human become jeopardized by the increasing integration with technology? A prosthetic device — say, a mechanical hand — is one thing, but what about the mind? Do we lose our humanness the second we install an operating system of some sort into our neural network? Undoubtedly, there will be many who will avoid this integration because they will feel it is unnatural, inorganic.

We can go deeper: at what point does thought, thought that is aided by artificial intelligence, become inorganic? To what extent will genuine emotion be influenced by synthetic modes of thinking? Say we can install a decision-making application into our neurological function, one that we can tune to be more rational or more impassioned; this could be considered a drift from genuine autonomy.

Will mass-produced artificial intelligence impose such commonality that it threatens our individuality?

Question three: To what extent will this integration be susceptible to external manipulation that proves detrimental to us?

“I saw a subliminal advertising executive, but only for a second.” — Steven Wright

Dr. Ben Goertzel, appearing on an episode of the Joe Rogan Experience, brought up this point when he stated:

“The real life pattern of human values gets inculcated into the intellectual DNA of the A.I. systems, and this is part of what worries me about the way the A.I. field is going at this moment… Most of the really powerful neural A.I.’s on the planet are involved with selling people stuff they don’t need, spying on people or figuring out who should be killed or otherwise abused by some government. So if the early stage A.I.’s that we build turn into general intelligences gradually, and these general intelligences are spy agents and advertising agents, then what mind set will these early stage A.I.’s have… we’re teaching them to manipulate people and we’re rewarding them for doing it successfully.”

I needn’t go much further with this self-evident point, though it is a terrifying one when we stop to consider the potential for viruses, or the likelihood of covert marketing campaigns that target us in our sleep or seed themselves in the depths of our subconscious. Subliminal messaging will have the potential to go to dark new places, and I would go as far as to predict that it will be one of the most controversial topics riding the coattails of this convergence.

“By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.” — Eliezer Yudkowsky

To ask these questions is to unintentionally cast a dark shadow over what could be the most monumental of human achievements, and it should be underscored that the benefits of this impending convergence are unfathomable, to the point that they may drastically outweigh the potential cons and concerns. In essence, we won’t know what to fully expect until we get there. But this doesn’t mean that we shouldn’t stop for a moment to at least ponder these types of questions, and I have complete faith that we will, as we are nevertheless human.