Superintelligence: Supergood or Superbad?

Moshe Sipper, Ph.D.
Published in Predict · 4 min read · Apr 16, 2023


AI-generated image (craiyon)

Let me begin with full disclosure: I am human, and I have written this piece by myself, without recourse to an artificial intelligence (AI) [though I did use an AI to generate the above image…]. Now that we’ve got that settled, let’s go for the jugular: Will AI end up benefiting humanity or… not?

One of the most recent additions to the growing array of stupefying AI creations is ChatGPT, a chatbot launched by OpenAI in November 2022, able to provide cogent answers to difficult questions posed by humans — in mere seconds. We have yet to see (if indeed we will) the subsiding of the resultant pandemonium (a word perhaps fitting, perhaps not, as it derives from “pan-”, meaning “all”, and “daemonium”, meaning “evil spirit” or “demon”). ChatGPT follows on the heels of other dazzling AI products, which seem to compete with humans on analytical and — more disturbingly — creative fronts (for example, DALL-E generates striking digital images from natural language descriptions, and VALL-E can mimic any voice from a short audio sample).

ChatGPT and its cohorts add fire to an already fiery debate regarding the possible rise of superintelligence, that is, an intelligence far surpassing that of human beings. It would seem that “possible rise” is quickly turning into “definite rise”, with the question left being not “if” but “when”.

First off, can we humans create a superintelligence? This question has been discussed at length (some would say ad nauseam) over the past few decades, since AI’s infancy in the 1950s. Suffice it to say that at this juncture the yea-sayers seem to be in possession of more (some say far more) ammunition for their arguments than the nay-sayers. And though the jury is still out, now might be a good time to raise some questions.

Assuming at some point in the near or far future humanity cedes its lofty position as the most intelligent species on the planet, what intentions will a superintelligence have? Specifically, what intentions will it have with respect to us?

I’d argue that there’s no way for us to know. Oh, sure, many have tried to imagine the motivations that would be at the heart (will it even have one?) of a superintelligence, but that’s what it amounts to: imagining. Do you think you could understand an entity capable of writing more novels than exist on Amazon — in one second? (Okay, okay, you caught me — I’m exaggerating: it would take one thousandth of a second at most…)

Consider those little social critters that build nests, known as ants. To an ant, a human being is a superintelligence, an unknown and unknowable force that affects its world, sometimes quite drastically. In this scenario, we are the superintelligence, so — by all means — let’s ask away: What are your intentions towards ants? Well, that depends: If they’re out yonder in the hills, minding their own business, then you’re probably not going to bother them — we coexist in peace; but if they’re in your backyard, harming your precious azaleas — out comes the ant spray!

And this analogy is, admittedly, only my weak attempt to grasp the enormous gap that will exist between us and a superintelligence. Indeed, this gap could far exceed that between ants and us. So if we're being honest, not much can be said about a superintelligence, except that it will, by its very nature, be much, much smarter than us.

And now comes a plethora of questions: Will a superintelligent entity have feelings? Will it have positive feelings? Will it have positive feelings towards humans? Will it be self-aware (just because we are intelligent and self-aware doesn't mean the two necessarily go hand in glove)? Will it have a soul? Will it cherish all intelligence, seeking to nurture the lesser beings? Or, au contraire, will it consider all other intelligent entities as threats to be eliminated? Or perhaps it will merely shrug (despite its lack of shoulders) at us in indifference? This last I believe to be the only reasonable answer to all these questions: Shrug.

Given that we don’t know, and as I’ve argued cannot know, much about the powers and limits of superintelligence, should we pursue yet stronger and stronger AI? After all, this pursuit is something we can control — or can we? Given humanity’s history, I believe the answer is a resounding no. Be it due to curiosity, greed, power, or whatever beneficent or maleficent reason, we humans tend to rush ahead, with little collective thinking (as opposed, ironically, to ants), even when we are pursuing thinking machines.

This rush cannot, in my opinion, be suspended. In a world of over 8 billion humans there is precious little chance we will all suddenly stop and consider whether we should press onward with the AI enterprise, let alone any chance we'll all agree on an answer.

So what can we do? My own answer would simply be to hope, or, for the less hopeful, perhaps heed Sam in J. R. R. Tolkien’s The Two Towers, who, “being a cheerful hobbit… had not needed hope, as long as despair could be postponed.”
