Artificial intelligence and the death of shared reality

Tin Money
Published in Gravity Boost
9 min read · May 29, 2023

AI “doomers” predict calamity in the future. What if the calamity is already here?

Image: RunwayAI

The age of information

Technological advances over the last 40 years have completely changed the exchange of information.

While Johannes Gutenberg is often credited with the first mass published book, The Gutenberg Bible, it was actually Wang Chen’s Nung Shu that was printed first — nearly 150 years earlier.

Regardless, since the late 1400s, the printed word has served as a bedrock for the exchange of information. Prior to the invention of the printing press, information exchange was laborious.

Whether through oral tradition, or scribbled out on parchment, passing knowledge required a lot of time and energy. Beyond the time and energy, a major problem was judging veracity.

How accurate is the information? Was it recorded correctly? Was it remembered correctly? Was it even true to begin with? How would you know?

The printing press decentralised the consumption of knowledge. It allowed for a greater number of people to pick apart ideas. It allowed a greater number of people to test the veracity of the information.

From this flowed the scientific method, hypothesis testing, publication, review, discussion, and refinement. All of which led to the discovery of new knowledge.

New ideas. New ways of thinking and approaching problems. It led to expertise, to professional academia, and eventually to institutions dedicated to the pursuit of knowledge.

Pretty groovy stuff.

Information industrialisation

With the industrial revolution came industrialised information sharing. No longer the purview of a scattering of publishers, information became a valuable and marketable commodity.

As industrialised information processing became more efficient, so did the dissemination of information. Literacy rates increased, publications became more widespread, and general knowledge exploded.

With that explosion of information sharing came an explosion of poor quality information. It didn’t take long for publishers to recognise the immense monetary value in publishing advertisements, gossip, scandal, fantasy, and fear.

Sovereign rulers (elected or otherwise) also used and abused the written word. Whether to foster resentment against an out-group, or to rally for war, or to entrench loyalty, propaganda flourished.

Thus came the modern problem of discernment. Much like the days of oral or hand-written information, how was one to judge the veracity of the information?

Was it a lie? Was it correct? Was there an ulterior motive?

Institutional ascendancy

While industrialised printing had certainly allowed for mass production, it was still enormously expensive to do. Even a small scale printing press in the late 1800s represented a substantial capital outlay.

And, once printed, it was enormously expensive to distribute that information. Entire distribution networks had to be developed, along with the personnel and equipment to do the distributing.

Given the economic reality of mass publication, publishers tended to fall into three camps: advertising driven news publications, religious and fiction book publishers, and academic institutions.

Each model differs significantly in how it pays for publication, but all three eventually settled into a rough equilibrium for the dissemination of information.

The integrity of the information varied widely, with academic institutions making the most earnest attempts at objectivity. At the other end, journalistic integrity has been, and always will be, split between trash gossip and “speaking truth to power.”

The invention of movies and broadcast radio and television dramatically amplified the reach of information. No longer confined to the written word, mass dissemination of information became an audio-visual medium as well.

Much like publishing, audio-visual media production and distribution required massive amounts of capital. From radio towers, to cameras, production studios, and widespread distribution networks, it required a monumental financial outlay.

Institutional death

It is important to note that the institutional media producers prior to the internet were only able to make money because they controlled production and distribution.

If you wanted to know something, those institutions were the only game in town. You want to learn how to fix a car from a book? You’re buying that book.

You want to read the latest news, you’re buying a newspaper, or you’re going without. You want to read the latest scientific article on electromagnetism, you’d better be affiliated with an academic organisation that pays for access to that article.

You want to see the latest movie? It’s coming from a major studio. TV or radio program? It’s a major network.

That’s all gone now.

You want information? It’s in your pocket. It’s a one-time, up-front fee for the device and a monthly subscription for network access. Once paid, you can learn anything at any time.

This has destroyed the information ecology that existed right up until 20 years ago. News and journalism have been hit the hardest by far.

If you’ve ever wondered why “mainstream” news today is so polarised, politicised, and generally low quality, it’s because the medium is dying.

That trash content is the death spasm.

The dissemination of information has been fractured into millions of pieces. When the “mainstream” news media attacked Joe Rogan regarding his choice of treatment for Covid, they very much failed to realise he has an audience that is orders of magnitude larger than all of the “major” news organisations combined.

Popular YouTubers routinely rack up more views than MSNBC, CNN, or Fox shows do. Twitter influencers, Twitch streamers, TikTok personalities, and countless other singular entities online can easily dwarf “mainstream” sources of information.

Never mind the legions of bloggers, citizen journalists, and podcasters. They’re all vying for your attention, as are the newbies coming up behind them.

The question becomes, who do you trust?

Given the research on confirmation bias and algorithmic “echo chambers,” it’s safe to say people don’t want information, so much as they want affirmation of information they already have.

So, where does the future of media lie?

Artificial intelligence

The first question you need to ask is: What drives the behaviour to get someone to click, like, read, engage, or watch content?

The next question you need to ask is: What behavioural outcome is desired by the person or entity seeking a click, like, read, engagement, or view?

Unfortunately, what the social media space has shown time and again is that outrage is one of the most powerful motivators of media engagement.

That’s why there is so much focus on polarising issues on ALL media platforms, legacy or new. Much like a battery, there is no power without polarisation.

Just look at the state of US politics over time:

Image: Bloomberg

You may notice the widening gap in cross-party collaboration directly correlates with the slow death of traditional institutional media.

In terms of desired behavioural outcomes, the dominant forces at play are sales (direct or indirect, e.g. advertising revenue) and consensus. Selling something, or having an audience for someone else to sell to, is pretty straightforward.

Consensus is a little more diverse. Consensus can be social norm enforcement, votes, ideological support, or authority recognition. But the levers being pulled are the same.

Outrage is one lever that is particularly effective for establishing in-group status. It is particularly effective for consensus building. And it is particularly effective for exerting social pressure and asserting or establishing value norms.

Outrage hijacks primal pathways in the brain. The same disgust you feel when thinking about eating something like a live worm can easily be redirected to a member of an out-group, or a disfavoured ideology.

Which leads us to artificial intelligence and the death of shared reality.

We are already witnessing the demise of our shared reality. That demise is largely driven by social media algorithms that amplify voices with engagement.

This is baked into the system. It is how social media platforms monetise you. The better they can anticipate how you will respond to a given stimulus, whether a video, or a post, or an image, the more accurately they can direct your attention.
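The engagement-driven amplification described above can be sketched in a few lines. This is a deliberately simplified illustration, not any platform’s actual ranking system: the `Post` fields, the `outrage_score`, and the linear scoring function are all hypothetical. The point it makes is structural — if outrage reliably predicts engagement, a ranker optimised for engagement will surface polarising content without anyone explicitly choosing that outcome.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # model's estimate that the user will react
    outrage_score: float         # how polarising the content is (hypothetical signal)

def rank_feed(posts, outrage_weight=0.5):
    """Order posts by expected engagement.

    A ranker trained purely on clicks effectively learns a positive
    outrage_weight on its own, because outrage correlates with clicks.
    """
    def score(p):
        return p.predicted_engagement + outrage_weight * p.outrage_score
    return sorted(posts, key=score, reverse=True)

feed = rank_feed([
    Post("cute cat video", 0.6, 0.1),
    Post("inflammatory political take", 0.5, 0.9),
    Post("local news update", 0.4, 0.2),
])
print([p.text for p in feed])
# The polarising post outranks content that is, on its own, more engaging.
```

With these toy numbers, the inflammatory post scores 0.95 and lands first, despite having a lower raw engagement estimate than the cat video (0.65). That inversion is the “battery” effect in miniature: polarisation supplies the power.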

Once directed, they can use that same data to optimise advertising or direct sales to influence your behaviour. This is all well and good if they’re trying to sell you bars of soap, or whatever.

It becomes problematic when those algorithms start directing you towards more and more behaviour-modifying content with self-destructive, socially destructive, or value-destructive messaging.

Bear in mind, despite the billions of dollars companies like Meta, Alphabet, and Microsoft devote to improving those algorithms, they’re still relatively clunky. Just think about how often you get fed ridiculous advertisements based on random searches.

AI changes that. It also supercharges the process. For now, AI is generally walled off. The most harmful potential is programmatically excluded. But, there is absolutely no reason to assume it always will be.

Whether it is “unleashed” by a foreign adversary, or by a malcontent, or by your own government — or indeed by AI itself — that harmful potential will make its way into the “wild” sooner or later.

Given that AI is systematically “trained” on human beings, and has network access to essentially all of human knowledge, it is going to understand our motivations, behavioural leverage points, and neurochemistry far better than any software engineer or UX nerd at Meta could ever dream of.

Coupled with AI’s ability to manipulate or create images, video, audio, and text, the avenues for behavioural modification of humans becomes almost unlimited.

As I write this, millions and millions of people are feeding Large Language Models (LLMs) such as ChatGPT enormous amounts of data. People are lying to LLMs, trying to manipulate LLMs, sharing secrets with LLMs, and asking LLMs personal health questions.

We’re collectively teaching AI our weaknesses, our vulnerabilities, our intellectual capacity, and our reactions. Keep in mind, these LLMs potentially (or already) have access to psychological research on humans, behavioural research on humans, advertising research, medical research, biological research, game theory research, you name it.

We are already at the point where you will always, and forevermore, question whether you are interacting with a person, a machine, or a combination. Every interaction, including face-to-face ones.

Has the person you’re talking to been influenced by an AI? Have they received faulty information? Are their opinions theirs, a machine’s, someone else’s, or a combination?

Is the account you follow a real person? Is the media report manipulated? Am I speaking with a human?

The point being, if you know a person is lying to you 50% of the time, the smart play is to assume they’re always lying to you.

Likewise, if you know 50% of the information you receive may be suspect, modified, or altered to influence you, the smart play is to reject all information.

But, in that state, how do you make a decision? How much corruption of information is tolerable before all information is useless? And, if all information is useless, what’s the point of information?

Shared reality

Human beings cannot see “reality.” Reality, at least in the physical sense, is a collection of atoms bound together by forces. What we perceive as “reality” is what nature has provided us to ensure we can survive in that collection of atoms without stubbing our toes.

Meaning, we create a version of reality in our minds. A representation of physical space. But, it is not a complete representation of physical space. We extrapolate from that representation to enable us to procreate and continue the species.

AI, in the completely decentralised information age, hijacks that representation. It destroys trust in the images we see, the videos we watch, the sounds we hear, and the text we read.

It destroys our shared reality.

Once destroyed, it’s irrecoverable. The first victims are already among us. Flat-earthers are right at the top of the list. It’s a mistake to assume you will be invulnerable.

Or, that you have mastery of reality.

It’s all in flux. It’s all happening right now. Remember, AI is still new. The next generation of AI is exponentially more powerful than what’s deployed now.

And, there is a growing legion of online activists seeking to harness AI to accelerate human extinction. They celebrate that outcome.

We, as a species, don’t yet understand ourselves. We have no universal morality, code of ethics, or belief system. There is more about us we do not understand than we do.

Biologically. Psychologically. Neurochemically.

Yet, profit motivated companies have the hubris to assure us they can render AI “safe” for humans.

Meanwhile, their current profit-making algorithms have rendered us addicted to social media, slaves to technology, and generally unhealthier, more distressed, and more distrustful than at any other time in history.

It was all sold as progress. It’s all sold as innovation. Just like they are selling AI now. Apparently, we haven’t learnt our lesson.

If they have their way, perhaps we never will.

