Creativity | Technology

Will the Real Human Please Stand Up?

When Everybody’s a Chatbot, Nobody’s a Chatbot…

Jefferey D. Moore
9 min read · Feb 6, 2023
Photo by Nick Fewings on Unsplash

Note: It’s become something of a blogging trend lately to let ChatGPT write part of the text and then pull off the mask to make a point to readers. This article does not do that. I will never do that. Every word, even the boring ones, came from the same person who’s typing this right now. I don’t think anything’s gained by muddying the human/AI waters other than a very unfair “gotcha.”

Most of my articles are very detached and professional, following a formal essay structure in AP style while avoiding sentence fragments, slang, and first- or second-person pronouns. Not this one! Why not, I hear my readers asking (in all our first-person glory)? Because this opening section has one purpose: to prove it’s being written by a human being.

That’s right, human “beans,” homo sapiens, people with eyes and fingers typing on keyboards that go clickety-clack when you press them, and why am I suddenly talking like a Dr. Seuss narrator, and why is this sentence running so endlessly on and on? Because (and see, there’s another sentence fragment) it’s something that ChatGPT wouldn’t do.

Welcome to the 21st century.

Ever since ChatGPT came out late last year with its unprecedented ability to craft detailed essays based on simple user prompts, I’ve noticed a growing prevalence of articles on Medium from new users that all have the same dry marketing voice and essay structure applied to lifestyle advice. They state a thesis at the beginning, restate it a number of times in a number of different ways, with varying levels of relevance and congruity, and then state it once more at the end for good measure. “Drinking coffee is good for the soul. Coffee helps you wake up and brighten your day. Coffee is best in moderate doses. Everyone should try to drink more coffee.” To say that it’s robotic isn’t just a tone critique: it’s a very literal observation.

(As a coffee addict, however, I wholeheartedly approve of that message.)

I won’t call out any particular writers, because this tone could very well come from someone who’s just following the standard school essay format to the letter. And therein lies the rub. We’ve all been taught to follow rigid rules and organize our thoughts when it comes to informative and persuasive writing. That isn’t a bad thing: being concise and coherent really is important, as is having some common framework and ground rules when we communicate. But all those things play to AI’s strengths, and now computers are getting close to the point where they can do it better than us. How do we tell the difference between humans and bots? Do human creators have anything to offer that machines can’t match?

Sorry, It’s the Way of the World

Image by Ben Moran on DeviantArt

A Reddit controversy last month offered us a sneak preview of the problems a world teeming with AI creativity can cause for human artists. Ben Moran is a freelance artist in Vietnam who creates commissioned fantasy art for a variety of clients, including the covers of novelist Selkie Myth’s Beneath the Dragoneye Moons series. On December 27, he shared the above artwork, created for an upcoming book in that series, on the /Art subreddit. He was almost immediately banned for posting AI-generated art. Spoiler alert: it isn’t AI-generated art.

Moran, or Minh Anh Nguyen Hoang, is a well-established digital artist who, upon being challenged as an AI fraud, offered the drafts and PSD files to prove that he’d created the image by hand. The anonymous moderators didn’t budge: “I don’t believe you. Even if you did ‘paint’ it yourself, it’s so obviously an AI-prompted design that it doesn’t matter.”

What’s an AI-prompted design, you might wonder? Why would a human being create artwork for a computer? Perhaps there’s a Yakov Smirnoff joke hidden somewhere in there: “In Soviet Russia, you paint for AI!”

“If you really are a ‘serious’ artist,” the moderator’s rejection continues, “then you need to find a different style, because A) no one is going to believe when you say it’s not AI, and B) the AI can do better in seconds what might take you hours. Sorry, it’s the way of the world.”

So there we have it. The ban was upheld not because the artwork was actually created by a generator like Midjourney or OpenAI’s DALL-E 2, but because it uses a similar style and the moderators believe AI can do it better anyhow. One irony here is the reason that style is so common in AI art in the first place: the models were trained on images scraped from across the web, where this sort of fantasy artwork is enormously popular. Given that Moran’s art had appeared online alongside a well-known fantasy series, it’s very plausible that AI art resembles his work in part because it learned from him.

There’s a happy ending of sorts here: the outcry over the ban, the positive media coverage he received, and the widespread support the larger art community offered Moran have more than outweighed one stubborn subreddit’s decision. What might have been a simple post that came and went in a day turned into weeks of free publicity.

What’s more, there are other factors in play with this story that make it less of a bellwether than it might seem. While the moderator who issued the ban remains anonymous, there’s speculation that it was one particular and rather infamous “powermod” who oversees a large number of subreddits and has a reputation for issuing nonsensical and belligerent bans. And the /Art moderation team’s eventual statement to Vice that “if we were to reverse course now, it’s saying online trolls get to dictate the state of the community, which we’re not ok with” can only be taken as pure obstinacy, regardless of any issues about AI generation vs human creativity.

Is there something to that dismissive reply that “it’s the way of the world”? As automated creativity comes to dominate popular styles (the ones with the most examples for it to learn from), will humans have to move further into the creative margins just to find breathing room?

That Moran is from Vietnam raises another troubling nuance to the discussion around AI creativity. The internet and social media era opened up the creative field to artists and writers all over the world and offered an opportunity for many people to find an audience they never could have reached a generation ago. Now machine learning, having been trained with their online artwork, is threatening to steal that very same audience.

Created With AI Assistance

Created by AI art generator “Coherent” on NightCafe Studio

A little over a week ago, Medium’s VP of content Scott Lamb announced the platform’s new policy regarding AI content. The policy, in a nutshell, is that content “created with AI assistance” is allowed if it’s labeled as such, but unmarked content that appears to have been created using an AI will be blocked from distribution. It’s a compromise between proponents who see ChatGPT as an innovation and critics who see it as undermining authors; like most compromises, it seems to have left both sides unhappy.

The supporters have asked just where the dividing line is between “created by AI” and “edited using AI.” Grammarly, for instance, has two editing modes: basic and professional. Basic editing just catches missing punctuation, spelling errors, and some filler words (from firsthand experience, I can tell you it’s obsessed with Oxford commas and really hates the word “really”). Professional editing, on the other hand, compares the writing input to an assigned content profile, such as fiction or professional, and makes stylistic suggestions (full disclosure: I don’t use that at all in my writing, and I even find the basic editing to sometimes be intrusive). Does that level of editing count as an AI composition that needs to be disclosed? If not, and if ChatGPT is used in a similar way, could it remain undisclosed?

In a way, this is the 21st century’s answer to the calculator-in-the-classroom debates of decades past. The year I took the SAT was one of the first years that calculators were even allowed for the test. Now they’re considered standard equipment, but you still have to bring your own. In theory, as the SAT website points out, all the math questions could be solved without one. In practical reality, someone using a computer to skip the grunt work is going to fare better than a student who’s doing it all on paper.

ChatGPT has brought that dilemma to the language side of academics and added the wrinkle of writing being a form of personal expression. Math is objectively right or wrong, at least on the level of everyday use: either the numbers add up or they don’t. There isn’t much flourish involved when it comes to solving (x-2)²=36. Some would say that many forms of writing, such as instructional or content writing, are just as artless, and that AI is just as useful a tool as a calculator is for solving math problems.
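To that point about math being flourish-free, the equation above has exactly two answers and no room for stylistic expression. A trivial brute-force check (purely illustrative) confirms them:

```python
# Solve (x - 2)^2 = 36 by checking every integer in a small range.
solutions = [x for x in range(-100, 101) if (x - 2) ** 2 == 36]
print(solutions)  # → [-4, 8]
```

However it’s arrived at, by hand, calculator, or script, the answer is the same; there’s no authorial voice to detect.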

The other side of the debate is both philosophical, that creativity is a form of human expression and stripping it down to the cold metrics of a learning algorithm will impoverish our culture, and economic, that content mills can use a text generator to flood the platform with low-quality articles and crowd out authentically human writers. Medium’s AI disclosure stance does address both concerns, though not completely, and time will tell if an informed reader base will be enough to buoy human content.

Another concern expressed over the course of that debate comes back to Ben Moran’s troubles on the /Art subreddit: how will Medium even be able to recognize AI content to enforce the rule about AI content disclaimers? That tricky problem’s already being tackled on several fronts.

Who Watches the Watchbots?

Photo by Max Duzij on Unsplash

Last week, OpenAI, the company behind ChatGPT, DALL-E 2, and other cutting-edge AI systems, revealed a tool to gauge whether a given text was written by a human or a chatbot. It’s called the AI Classifier, although, by the company’s own admission, it isn’t very reliable: OpenAI’s announcement states that it correctly identifies only 26% of AI-written text (while falsely flagging 9% of human-written text as AI), and a reporter found that it will consistently misidentify the first chapter of the book of Genesis as AI-generated output (I, for one, welcome our new chatbot deities).
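To put those numbers in perspective, a quick Bayes’ rule calculation shows how weak a single “flag” from the classifier really is. The 10% base rate of AI text below is purely an illustrative assumption, not a figure from OpenAI:

```python
def prob_ai_given_flag(tpr: float, fpr: float, base_rate: float) -> float:
    """Bayes' rule: probability a flagged text is actually AI-written."""
    flagged = tpr * base_rate + fpr * (1 - base_rate)  # overall flag rate
    return tpr * base_rate / flagged

# Classifier stats from the announcement, with an assumed 10% base rate:
print(round(prob_ai_given_flag(0.26, 0.09, 0.10), 2))  # → 0.24
```

In other words, under that assumption, roughly three out of four texts the classifier flags would actually be human-written.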

Other companies have been busy in the meantime with their own AI recognition apps, but those services tend to lag at least a generation behind the current standard, focusing on spotting GPT-2 and GPT-3 text. And the race is always on for chatbots to create more human-like output: as NBC discovered over the weekend, ChatGPT itself can successfully rewrite its content to avoid being detected by its sibling, if you ask it to do so.

There’s also been talk by OpenAI about adding a cryptographic watermark to ChatGPT’s text output, an embedded pattern within the text that could be detected by an app that knows what to look for. It’d be like stacking a deck of cards to have hearts appear at regular intervals, or, in this case, perhaps making every Fibonacci-sequence word start with T.
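A toy version of that idea can be sketched in a few lines. To be clear, this is not OpenAI’s actual scheme (which hasn’t been published); it’s a minimal illustration of the keyed-hash “green list” approach researchers have described. A watermarking generator would steer its word choices toward words the hash approves, and a detector holding the same key simply counts how many words pass the check:

```python
import hashlib

KEY = b"shared-secret"  # hypothetical key held by generator and detector

def is_green(prev_word: str, word: str) -> bool:
    """Keyed hash splits the vocabulary roughly in half for each context."""
    digest = hashlib.sha256(KEY + prev_word.encode() + word.encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    """Fraction of words on the 'green list' given the preceding word."""
    words = text.lower().split()
    pairs = list(zip(words, words[1:]))
    if not pairs:
        return 0.0
    return sum(is_green(p, w) for p, w in pairs) / len(pairs)

# A watermarking generator would push this fraction toward 1.0;
# ordinary human text should hover near 0.5 by chance.
```

Without the key, the bias is statistically invisible; with it, a long watermarked passage stands out immediately.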

The trouble is that paraphrasing the output would remove the watermark, and a less scrupulous developer could leave it out completely and market their chatbot as being untraceable. In the long run, however, it may be a better solution than trying to recognize AI text by style alone. If told to write a sentence about a dog crossing a street, a human and an advanced chatbot would probably both write “a dog crossed the street.”

That anonymous moderator who banned Ben Moran’s art said that his style’s useless because an AI can do it faster anyway. Medium’s disclosure rule posits that, given an informed choice, human readers might prefer human writers and so provide a free-market solution to the problem. OpenAI and other companies are trying to create tools to give their consumers such an informed choice, even while the AI content generators themselves are being improved to seem more authentic. It’s a dizzying time for every creative industry, and it’s hard to imagine at this point how things might look in ten years’ time, as AI output becomes the norm.

Perhaps “made by humans” will become the new “made in America.”

Thank you for reading this completely human writer’s article! Each week I’ll be posting new articles (also written by a human, probably the same human) covering science, philosophy, psychology, pop culture — pretty much anything and everything that I think is interesting and worth talking about.

Looking for a confidential content writer, ghostwriter, or copy editor? Email me at Jefferey.D.Moore@gmail.com or visit jeffereymoore.com for more info!


Jefferey D. Moore

Content writer, ghostwriter, copy editor. Production assistant and writer for Audio Branding: The Hidden Gem of Marketing. Professional geek. 100% human.