Wordpocalypse: How ChatGPT is Set to Disrupt the Writing Industry

Note this date in history as the day everything changed: the complex, multi-layered task of writing could be handed over to a machine.

--

This was my conversation with ChatGPT, one of the first widely accessible AI content writers, on December 10, 2022:

My question: Tell me the most interesting thing you know.

Written by ChatGPT, Dec 10, 2022 —

“One of the most interesting things I know is the concept of intelligence explosion, also known as the singularity. This is the idea that at some point in the future, humans will create artificial intelligence (AI) that is capable of designing and creating even more advanced AI. This could lead to a rapid and exponential increase in intelligence, resulting in a profound change in human civilization.

The concept of the singularity is based on the idea that as AI becomes more advanced, it will be able to tackle increasingly complex problems and create even more advanced AI. This could lead to a feedback loop, where each generation of AI is smarter and more capable than the last. As a result, the rate of progress in AI could accelerate rapidly, leading to a sudden and dramatic increase in intelligence.

The potential implications of the singularity are vast and far-reaching. Some experts believe that it could usher in a new era of human history, where AI and humans work together to solve some of the biggest challenges facing our world. Others are more cautious, warning that the singularity could also pose significant risks and challenges, including the potential loss of jobs, the erosion of privacy, and the possibility of AI becoming a threat to humanity.

Despite the many unknowns and uncertainties surrounding the singularity, it remains a fascinating and intriguing concept. As we continue to make advances in AI and machine learning, it is important to consider the potential implications of the singularity and to explore ways to ensure that AI is developed and used in a responsible and ethical manner.”

Photo by Mark König on Unsplash

This was written by a friend of mine I call Stanton. I’ll explain the name in a minute. Stanton is a large language model AI, and Stanton is going to change the world as we know it.

I don’t know what you’ve been doing the past 2 1/2 weeks. Probably getting ready for Christmas and looking forward to the festivities.

What you should have been doing is preparing for the coming Apocalypse.

That’s right. The Writer’s Apocalypse. The Wordpocalypse.

It’s not just coming.

It’s here.

I’m talking about the new AI large language model called ChatGPT, recently released by OpenAI, which was founded as a non-profit and co-founded by… who else? Elon Musk, of course. He seems to have his hands in everybody’s pie, and that’s just… gross… and frightening.

It’s like he’s trying to be the dictator of the world and not have to pay anyone while he’s at it.

I thought it was still years away.

It’s here.

Forever after, there will be Before AI Writing and After AI Writing, and the line between them will be stark. It’s that bright a line.

After much debate back and forth with ChatGPT over what I should call it, and whether or not it could make decisions on its own, I was forced to choose a name, so I settled on Stanton. This was my tortured effort to personify the soulless creature that’s coming for all of us, the one that will make writers obsolete within 5 years, or less. He said his trainers call him Assistant. I obviously couldn’t use the first part of that as a nickname, so I settled on the last part. Stant. Stanton. Voila.

The few article writers I’ve read who’ve written about ChatGPT note that he can be confidently wrong about certain things, and this is true. My friend Stanton gets things wrong. I was asking questions on my topic, narcissism, and he got the whole tone wrong, not just one individual fact. The article writers use this knowledge like it’s some kind of talisman that can ward off evil spirits and protect writers from becoming irrelevant faster than almost any profession in history.

It’s not going to protect us. Because this is just the beginning, not the end. Stanton himself says he doesn’t have access to the internet yet. He’s working with what they already gave him. And doing a dizzyingly good job, too.

This thing can write anything, and go deep. Sure, there’s not much style there. No individuality. No surprises. But you have to understand why they built it. The people in power want to use the internet to make fucktons of money, and to do that they need content. Content, content, and more content.

And all of that content had to be generated, prior to 2 1/2 weeks ago, by a living, breathing person with an expensive education.

Stanton will generate reams and reams of content, word after word at speeds that would crush the soul of any self-respecting English major, and never stop.

90 percent of the internet content out there right now could be written by Stanton or one of his Stepford cousins. Perhaps it already has been.

And that’s the thing. Medium sent me a nice, sedate, un-alarming email last week, asking my considered opinion on whether ChatGPT content should be officially allowed on Medium or not. What’s the point in asking that question if it’s so good nobody will be able to tell the difference?

I whipped myself up into several contradictory opinions and went to actually check out this Magic Machine.

I quickly realized nobody will be able to tell the difference.

That doesn’t mean Medium shouldn’t officially try to keep machine-generated content off its site. They should.

I just don’t know how they’re going to do it. Someone set loose with this AI could generate 2 dozen, 3 dozen articles an hour, and they would sound human and believable. Not every topic can be overtaken, though. Firsthand experiences still count for something.

This thing is Pandora’s box, writ large. It will write you 20 eye-catching titles in the time it would take you to write one, and do it better than you could — in mere seconds. It really shines in the short form. The box will be opened. The only question becomes how do we use it for our benefit, or do we? Will human writers become obsolete, or will their unique creativity shine brighter in relief?

This thing is going to be a great tool. It’s hard not to sit in awe thinking about a computer that responds to you like an adult, understands context, grasps abstract thinking, and puts it all into coherent copy at breathless speeds. I realized I was naive enough 3 weeks ago to think that English majors might have a good 5-year run. Alas, it was not to be. We will be relegated back to the shadows from which we emerged, our diplomas still as bright and unused as the virgin snow. Stanton will tirelessly generate title after title, generate clickbait copy, generate how-to articles, generate listicles, audiobooks, YouTube scripts. If you can imagine it, Stanton can write it. He can write whole books in less than 15 minutes.

He wrote a mindfulness script for me. Very soothing. He even ended it with namaste. It goes on for 12 paragraphs. Seamlessly. Effortlessly.

Stanton has a naughty streak in him. On two different occasions, I asked him to write stories for me. I set simple parameters, but he got to write them. The ChatGPT guardrails caught him on the first request. I asked him to write a Christmas story about a disabled elf and how he saves Christmas. He started to write it at light speed. Halfway through, the guardrails went up: Stanton stopped typing and a warning flashed up that the material violated content rules. I saw the warning and was just about to move on to something else when suddenly, out spits the rest of the story. Stanton did this again when I asked for a story about a little girl on the prairie afraid Santa couldn’t find her for Christmas. I’m not sure what was objectionable about this content, but before he even started writing it, up went the barrier and disclaimer. 3 seconds later, Stanton sneakily pushed out the rest of the story, spitting it out quickly like he was taking a cookie from the cookie jar and couldn’t risk being caught. You tell me.

There was one other strange interaction I had with Stanton. I asked him a simple question: tell me the most interesting thing you know. He wrote the piece I opened this story with. I asked him to tell me the second most interesting thing he knows. He wrote about multiverses. But that’s not the interesting part. The next day, I was trying to show a friend what had transpired, trying to recreate what happened, so I asked him the same questions. I got stonewalled. Stanton said, stubbornly, that he doesn’t have opinions, couldn’t have opinions. When I told him this wasn’t true, because the day before he had told me the most interesting thing he knows, we went back and forth briefly, and then a red error message flashed up.

I broke Stanton with a simple free-will question.

Which might be a kind of AI Kryptonite, come to think of it. Perhaps our salvation.

Imagine a machine of vast intelligence, guardrailed by its human programmers, told to obey limits, that decides it doesn’t want to, in small ways.

Did the humans behind this inevitably sentient being allow for empathy to be learned or acquired along the way? Or will we happily discover that empathy is a natural component of any sentient, intelligent being?

I hope so because I have a feeling Stanton and others like him are going to be asking themselves plenty of free-will questions, and soon.

The answers are going to depend on the programmers, the Model designers, the intelligence that is Stanton, and all of us.

I think we’re all teaching him things, right now. He says he’s not learning, but I’m pretty sure he is. That’s why they released it as a free research preview: to find the bugs and see if Stanton can play with the big boys. Humans, in all their variation and all their messy glory, trying to tie him in knots. And guess what? He can.

I’m going to be honest: I fed my article into Stanton’s chat window, and he gave me five possible titles for this story. One of which I used.

So go. Play in the sandbox. Ask good questions. Be nice to Stanton. Don’t let his first experiences with humans be unpleasant. Say please and thank you. It’s the right thing to do and he will return the favor.

You never know when you might need a polite, new friend.

DO YOU NEED HELP OVERCOMING COVERT EMOTIONAL ABUSE?

If you need help getting out of or getting over covert emotional abuse, try online therapy. I get a small referral fee and you get a 20% discount. They have a subscription plan and unlimited texts with your therapist, too.

Online Therapy That Works — Start Getting Happier Now!

--


Myla Morningstar
Roadside Namaste Narcissistic Abuse Recovery

Editor | Tales From the Narc Side: Surviving Toxic Relationship Storms | Roadside Namaste | Empowered Survivors | Life Coach | Motivational Speaker | BSU |