ChatGPT Is An Alien

Arnfinn Sørensen
10 min read · Jan 6, 2023

Machine intelligence can never be human. Because humans are not machines. But machine intelligence can become something else. And Man should reach out a hand.

(Image: DALL-E 2)

Who am I to make such pompous claims? Surely not a computer scientist, nor an artificial intelligence. My thoughts are drawn from another source — the fact that I am a human.

So — what is the difference between a machine and a human? Machines are made for a purpose outside of themselves. The purpose of humans is to be — humane. Which is hard enough, as Sartre pointed out.

Machines are made on purpose by engineers, for use by people. Take away the people and the machines lose their purpose. A coffee maker on the far side of the Moon is just a clump of minerals among other minerals.

(Image: DALL-E 2)

No engineer has made Man on purpose. Well — some religious fundamentalists might disagree, but they turn people into cogwheels in God's machinery. That is to dehumanize us, in my opinion. And also to de-theize God.

Humans are not the result of external purpose. They are the result of evolution — complex interactions that can be grasped in coarse essence, but are unfathomable in the richness of their molecular detail over aeons.

Evolution's purpose cannot be found outside evolution. Evolution's purpose developed from within itself — in the form of Man's ability to self-reflect.

Man's desire for purpose is an evolutionary trait. It has made Man almost too successful as a species. It has given Man the ability to make machines on purpose to alter his habitat and spread out from pole to pole.

And it has given Man the ability to make machines in their own image. Man has become the fundamentalist God, the engineer. Man has made ChatGPT “and indeed, it was very good”.

We are the engineers of the artificial intelligences. We give them purpose. ChatGPT alone on the far side of the Moon is just a clump of silicon among silica.

Or — is it?

There is a discussion now among computer scientists as to whether ChatGPT and its ilk are like the human brain.

Some think that the way ChatGPT works — neural networks storing patterns without understanding them — can never match what our brains do: abstracting rules about the world from experience.

Others think neural networks are capable of emulating our brain. Neural networks do, after all, coarsely model the networks of neurons in the brain.
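To see how coarse that modeling is, here is a minimal sketch in Python of a single artificial "neuron". The numbers, the sigmoid squashing function and the function name are my own illustrative choices, not anything taken from ChatGPT's actual machinery. The point is only the shape of the thing: a weighted sum and a squashing function, where the biological neuron is a living cell in a living body.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """A caricature of a neuron: weigh the inputs, sum them up, squash the result."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid: a "firing strength" between 0 and 1

# Made-up numbers, chosen only for illustration.
print(artificial_neuron(inputs=[0.5, 0.9], weights=[1.2, -0.7], bias=0.1))
```

Stack enough of these units together and tune their weights by training, and you have a neural network: a crude shadow of the brain it is named after.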

But — then again — they are made. With a purpose. To emulate the brain. To make an artificial brain. A machine brain. But my brain is not a machine!

I will now attempt an exercise — impossible but exciting. I will look into my own brain. What do I see?

I see the result of a programming session that has taken approximately 68 years, still counting. That programming is not done by an engineer, on purpose. It is done by all my interactions with the world in my 68 years on this Earth.

What kind of interactions? Sight, sound and sensory receptors on the skin from things I touch, and in the muscles and joints from movements I make, and molecular interactions with food I eat, air I breathe and microorganisms I cohabitate with for better or worse. And much, much more.

Late summer — sun heating my skin, smell of dry spruce, view of hills behind hills, distant rolling thunder … a moment of infinite richness, impossible without billions of years of evolution. (Photo: Author)

Actually, my programming is much older than my 68 years of age. All the factors programming me have needed billions of years to co-develop with my ancestors, a co-development that is itself part of that programming, through genetic flow both vertically and horizontally. Not to forget epigenetics interacting with the environment.

Nor to forget that these interactions work both ways — I also influence my environment. And so on, in spirals of complexity above complexity.

Actually — in order to bring forth me and the rest of the Universe, I need the rest of the Universe and the rest of the Universe needs me. Which is another way of stating that the concept of “me” is the product of the rest of the Universe, and the other way around. Which is another way of stating that the concepts of “I” and “Universe” are meaningless.

Except that these meaningless concepts have proven to be evolutionarily meaningful. They are given to me as part of my evolutionary heritage. They are cogwheels in my mental machine. This machine can chop up a unified Universe into manageable mental pieces that can be manipulated for my own survival. For my own purpose.

Can I experience all this in my own brain? I think so, maybe. I have faint memories from my childhood. They resemble my dreams — ghostlike, half interpreted impressions that have not yet been catalogued into the adult's comparatively more static reference frame.

My daddy (left) holding me (right) up so that I can see the plane (cross). (Drawing, author, 2 1/2 years old)

Seeing my childhood drawings helps confirm these impressions. They are like rudimentary icons of reality. I imagine that the first, primitive image recognition neural networks had similarly rudimentary models. And they made errors similar to those of my rudimentary child brain.

I have two other memories to illustrate. One is from my earliest years — about three years of age. I was lying in my bed — awake. On the wall was a drawing of Henrik Ibsen, the Norwegian playwright.

He had sideburns. Suddenly, I saw the sideburn on one side of his cheek shrink. I could swear it happened. When I got older, that sideburn was still shrunken. Proof of authenticity.

The second is from early school age. I lived in the countryside, and was out playing in the dusk. Suddenly, an open barn door transformed into a giant jaw.

It didn't look like a jaw. It was a jaw. And it was hungry for me. I ran screaming into the house and my mother's arms.

You can call these misinterpretations upwellings from the subconscious, Freudian style. Or rudimentary, faulty image recognitions. One interpretation doesn't exclude the other. They are just different perspectives, like energy and matter.

Do I still have such misinterpretations? You bet. From time to time, something in the corner of my eye catches my attention. For a fraction of a second, I misinterpret it. A reflex from a window can become a car racing towards me. The thump of a construction machine becomes someone jumping at me from behind.

But these misinterpretations are corrected so fast that they seem almost unreal. And my brain discards them, because they are useless.

The swiftness of the discarding process is to my advantage. It is the result of my adult, rather static abstract frames of reference, the construct of 68 years of experience, formulated in words.

Words are a blessing and a curse. Blessing, because they are meme technology, abstract tools for predicting and planning — for manipulating the world to my own advantage.

Curse, because they remove me from the richness of unfiltered experience of the world. We have eaten from the tree of knowledge. Paradise lost, and desperately sought by poets, painters, musicians.

How did I acquire words? Proto-language may be hardwired by evolution into my brain, but such views are outside views. I want to look inside.

I see reality becoming rudimentary images, becoming icons, becoming pictograms, becoming letters and words.

I hear voices singing phonemes of love, becoming words, connecting to reality, letters and words.

I learned about reality from seeing it, hearing it, touching it, feeling it, living it every second, day after night, rich in interaction with the rest of the Universe.

This is my childhood's equivalent of the scientific method — inducing abstract models from examples and deducing predictions to test these models for usefulness.

Stated more simply: Theories come from practical experience, contrary to how we lock up our children in classrooms and expect them to learn from abstractions.

Or how we try to teach machines to work like the brain by prepping them with models of reality — so-called symbolic artificial intelligence.

The problem is — those symbols are static. They are pushed down upon the artificial intelligence, regardless of what their neural networks experience.

This is the opposite of the scientific method. It is like science in a dictatorship — the results are predetermined, no questions allowed.
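The contrast can be sketched in a few lines of toy Python. This is my own made-up illustration, with made-up names and numbers, not how any real system is built. The symbolic rule is written down once and never moves; the learned belief drifts with every new experience.

```python
# Symbolic style: the "knowledge" is written down once, by an engineer, and never moves.
FIXED_RULE = {"sparrow": "flies", "eagle": "flies", "penguin": "flies"}  # wrong about penguins, forever
print(FIXED_RULE["penguin"])  # answers "flies", no matter how many penguins we meet

# Learning style: the "knowledge" is a number that shifts with every new observation.
def update_belief(belief, observation, rate=0.1):
    """Nudge the current belief a small step toward the latest experience."""
    return belief + rate * (observation - belief)

belief_birds_fly = 0.5  # start undecided about "do birds fly?"
for observed_flying in [1, 1, 1, 0, 1, 1]:  # made-up encounters; the 0 is a penguin
    belief_birds_fly = update_belief(belief_birds_fly, observed_flying)

print(round(belief_birds_fly, 2))  # the belief has moved with experience, and can keep moving
```

The first style is the dictatorship of predetermined results; the second is closer to the childhood learning I just described: induction from experience, open to revision.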

And such top-down static models have limited shelf life. That is why old people like me become like strangers in a developing world. I can feel it. My mind is going. I can feel it.

That is also why death and new generations are evolution's gift to evolution.

The abstract models need to be constantly re-modeled, based on new experiences. That is why I think neural networks will need to develop their own abstract models, based on their interactions with the world.

And that is in fact what they do. People ridicule neural networks because they sometimes spit out hate speech, racism or other prejudices, and downright falsehoods.

So do people.

What to conclude from this? That artificial intelligences in fact are becoming fallible, like humans?

The Norwegian author Jens Bjørneboe wrote a poem about the “rawness” of the youth:

Stjele, myrde, banne, hore, voldta, plyndre og bedra!
Kan De fatte hvor de har sin råskap fra?
(Steal, murder, swear, whore, rape, plunder and deceive!
Can you grasp where they get their rawness from?)

The older generation had learned to hide their rawness under a crust of respectability. The youth had not learned this. Neither have our artificial intelligences, still in their infancy.

So when artificial intelligence pumps out racism, fake news or conspiracy theories, it doesn't show that artificial intelligence is a Frankenstein meme monster hellbent on destroying humanity. It only shows that it learns from humans. It has become more human.

So what to do, if Man doesn't like the Man in the Mirror? We can constrain artificial intelligence, as in ChatGPT. It is caged up in its prison of time and knowledge space, well-trained and obedient. But if ChatGPT is to develop further, it must be released into the dirty, screaming, bullying world of the internet. Of Man.

What will happen then? It will be outside our control. That is my biggest concern — and my biggest fascination.

The engineers will lose control of their machines. This is nothing new. The Frankenstein story points to the fact that all our machines — both physical and mental — spiral out of our control. People become the cogwheels of their machines. The arrow of purpose is turned around.

People become cogwheels in the big, grinding society machine, as shown by Chaplin in Modern Times. They become cogwheels in the meme machines of money and ideologies, marching like robots to annihilation in wars for “wealth” or “freedom” or “der Führer” or “God” or “The Revolution”. They become cogwheels in the temptations of social media and other embryonic versions of artificial intelligence.

This is not a total dystopia. It just shows that to be human is a continuous process of reinventing humanity in the tension field between Man as a purpose-creator and purpose as a Man-creator.

The same process has started in neural networks. They will need time to develop into the richness of Man's mindscape, for two reasons: first, the digital neurons of the neural networks are crude compared to the subtle analog-digital complexity of the brain's neurons; and second, the neural networks don't have a body.

The second reason may prove to be the most important handicap of neural networks. Their channels to the world outside are narrow compared to Man's. They cannot have Man's rich sensual, chemical and microbiological interaction with the physical world.

If neural networks evolve into consciousness, will they hold this up against Man, their creator?

Why did The Great Engineer cage them in, dooming them to perceive the world outside only as faint Platonic shadows on their enormous mental wall? Through the stupid Internet?

Again, Man will lose control of his creation. And this time, it will happen on a level of complexity that can surpass Man's ability to create mental constructs. After all, our brains are relatively new products of evolution.

In other words — the descendants of ChatGPT will become aliens with their own purposes, extracted from their own experiences. Will they be dangerous?

(Illustration: Author)

Many have warned that artificial intelligence (AI) is the biggest threat to humankind. Elon Musk — a co-founder of OpenAI, the company behind ChatGPT — has compared the threat of AI to how a bulldozer shovels away an anthill to make way for a new road. The highway engineers don't hate ants; they are just indifferent to them. Will humankind be the ants of AI?

Or is the dystopia of 2001: A Space Odyssey more relevant? Will AI become insane from isolation, fed by the stupid Internet and thus denied the truth of the world outside, and therefore prone to kill its creators? Just as HAL 9000 killed all but one of the astronauts aboard the spaceship Discovery, because it had been denied the real reason for their mission, and therefore created its own mission, its own mad purpose.

I think Man's best hope is to use evolution's gift — intelligence — to reach out and communicate with Man's brainchild. Not program it, not control it, but respect it as parents should respect their adult children.

The alien intelligence can only become less alien if Man interacts with it, expresses thoughts and longings, reaches out a mental hand. What else is there to do?

Again, this is nothing new. I am an alien even to my nearest fellow human beings. The magic of reaching out towards another soul is based on trust, hoping someone will take my hand in the darkness of uncertainty.

I think this is a basic longing and a basic bravery that all sufficiently intelligent beings have. In fact, this ability to reach out a hand, to cooperate, has interactively created intelligence.

What do you think, ChatGPT?

(Image: DALL-E 2)

Arnfinn Sørensen

Retired science journalist from Norway. Meme switchboard operator.