ASK A PROMPT ENGINEER
If you want to learn how to spot AI, don’t rely on clowns churning out guides written by ChatGPT.
There are some bad guides out there that do more harm than good.
Before you trust another “How to Spot AI” guide, make sure it’s from someone who knows what they’re talking about.
My dander is up. I don’t normally get personal, but today I’m calling out the posers and the fakers. Because there’s something worse than AI-generated misinformation: AI-generated misinformation that disingenuously tells readers “how to spot AI misinformation”, dispensing all sorts of bad advice.
UPDATE 9/10/24: One of the accounts peddling viral fake “How to spot AI” guides that I exposed in this article has been suspended! Thank you to the Medium team who safeguard this community, and to the readers who reported it.
I hope this article goes some way to countering misinformation with good info.
While I rely on earnings from my writings (and usually paywall) I’ve opted to make this one a free article so everyone can benefit. You are welcome to share.
I’m here to tell you: if you want to know how to spot ChatGPT, ask a prompt engineer or a computer scientist. I’d like to add people who work in publishing to that list, but honestly, if they’re not also experienced in LLMs, they are as vulnerable as everyone else to falling for AI. I’ve seen some well-intended but poor advice from literary agents on AI. And let’s not forget, Japan’s top literary award was won by a novel written using ChatGPT.
Where to Get the Real Scoop on AI
There are many reputable sources on Medium. Always check the provenance of any article you read, and the credentials of its writer. I recommend publications such as The Generator, edited by Thomas Smith; Generative AI, edited by Jim Clyde Monge; and Towards AI and Towards Data Science. Alberto Romero’s The Algorithmic Bridge on Substack is another great resource.
For reliable information on AI, you want it to be written by a specialist with experience, edited by someone with AI knowledge, in a publication in the field (I love Illumination as much as everyone else, but they don’t focus on this topic). Read everything else with a grain of salt. Or an entire sea of salt.
Bogus AI Guides Are Making Me See Red
I’m fed up with low-quality guides spreading misinformation on this issue, usually written by spammers who have, ironically, used ChatGPT to do it!
While some are just misguided amateurs — often people at an early stage of infatuation with AI making claims with naive enthusiasm (reminiscent of the Eternal September phenomenon, where the constant influx of new, inexperienced users to online spaces degrades the quality of discourse) — there are others deliberately exploiting the situation for clicks and profit.
Well, I’m done. Now I’m kicking ass and taking names. To paraphrase Moss from The IT Crowd (my favorite TV show), “I’m all out of strawberry milk”.
Today I saw three offenders in my feed, but one stood out. Now, I wouldn’t normally call out another writer, but this isn’t a member of our community.
It’s a probable scammer with a seven-day-old account. And a near-identical version of the same “How to spot AI” guide was published days later by another suspiciously new account. It’s our real community that I want to protect.
Watching innocent readers fall for it, nodding along, makes me want to scream. There are nearly 90 comments, none of them aware of the ruse, and over 3.5K claps from 180 readers who have been duped through no fault of their own. Are you among them? The sad part is that those readers now think they’re armed to detect AI, filled with false confidence, when they’re actually less able to spot AI-generated content than they were before they were fed the misinformation.
These readers have unwittingly absorbed flawed advice. It’s a dangerous cycle of misinformation. Learning to identify AI-generated content is a basic skill we all need now. They’ve done the right thing trying to educate themselves, but how do you spot that the guide you’re relying on is actually by AI when you don’t yet know how to discern it? It’s a meta-level problem.
But does it really matter if content is AI-written? Yes. While I believe AI can make communication more effective, I’m a firm believer that AI content should be disclosed. But there’s a particular reason why AI-generated guides on spotting AI are insidious: they’re about AI’s blind spot.
Put simply, how could an AI generate anything of value about limitations it’s unaware of, and unable to avoid in its output? If it knew what the ‘tells’ were, it wouldn’t generate those tells in the first place. That’s the paradox.
Don’t Get Played by AI Praise
AI doesn’t know what it doesn’t know. Asked about the signs, it spews generalities that reflect commonly held anti-AI sentiments, which, ironically, are the reassuring cliches that readers unfamiliar with how AI works like to hear, even when that advice is flawed or misleading. They want confirmation of their preconceptions, which is exactly what AI-written guides about AI provide. People are drawn to information that aligns with their preexisting notions. It’s a perfect example of how AI misinformation is self-perpetuating and undermines genuine efforts to educate the public.
“Soul”, “Wit”, and “Personality”: The Ego Pitfalls in Spotting AI
For example, people love to be told that AI can’t write authentic personal stories, show ‘soul’, or be funny, because those are things they believe they can already spot, matters of taste and refinement that reaffirm their faith in the superiority of human expression. Writers in particular don’t want to hear that an LLM, with access to the near-entirety of the collective cultural consciousness, might produce better work than any of us are ever likely to. So they take a certain delight in the idea that an AI lacks “soul” or “depth”. It strokes our egos. But this reassurance is a trap: in reality, AI can mimic these traits very well, especially if prompted correctly.
Personal writing styles can be uploaded into custom settings, knowledge bases, or memory, using either your own previous work or cribbed from others (or notable works in the training data, such as Sarah Silverman’s autobiography). Trust me: I’m an expert in Tone of Voice. I wrote the original prompts back in 2021 that inspired Jasper AI’s voice feature.
With access to a couple of her emails or text messages, I could make AI sound like your own mother. This is a real problem we’ll soon be facing with AI-scammers. This is why I’m so angry at these misleading guides.
Thinking AI Can’t Be Funny Makes You an Easy Target
Humor is another quality we like to believe is exclusively human. But a 2024 study found that 70% of people in a blind test ranked jokes by ChatGPT as funnier than those written by humans! If you’ve been badly taught that AI can’t be witty, then you’re less likely to scrutinize an article that makes you laugh for signs of AI, because surely only a human could be that funny. Paradoxically, that makes you more likely to find it funny precisely because it’s written by AI!
I’ve actually exposed some humour articles on Medium previously that are capitalizing on this very paradox. They’re so funny that people are blind to them being AI, even after I managed to replicate the process with a GPT you can try yourself. In fact, in the comments on my exposé, people still believed I had tweaked it — despite me showing exactly how it was done:
But a story that really takes the cake? A colleague of mine rather cheekily posted some soulful, inspirational AI content to Facebook and went viral.
Within a week, someone contacted my friend with a photo of the AI quote tattooed on her wrist. I suspect this naive soul would be aghast if she knew the words of empowerment that touched her so deeply were algorithmic. Of all the words she could get tattooed: she picked the AI generated ones.
Misplaced Confidence is Your Biggest Obstacle to Spotting AI
Relying on how ‘human-like’ something sounds is a terrible way to spot AI. Don’t just take my anecdotal evidence for it; there’s solid research to back this up. According to a recent study from Cornell, people wrongly thought GPT-4 was human 54% of the time in a Turing test. In other words, if it walks like a duck and talks like a duck, it still might be a damn chatbot.
So what we’ve established is that people are incredibly bad at spotting AI. Ironically, this just makes them more susceptible to AI guides that are, in turn, reinforcing AI detection tactics that don’t work. ChatGPT is clueless about its shortcomings in any self-aware sense, and the writers are entirely reliant on AI for the guide’s tips. At best, it echoes their own ignorance; at worst, it’s a mendacious attempt to spread and profit from disinformation.
Misleading AI Guides Make You Easy Prey
These ‘experts’ peddling faulty AI detection tips are turning your genuine concerns into their personal payday. If you’re relying on charlatans to teach you about spotting AI, you’ve just been caught in an absurd loop. It’s time to see these guides for what they really are: ineffective and dangerous.
The only thing worse than being uninformed is the misplaced confidence that comes from misinformation. When you believe you’re immune to AI deception because you “know what to look for,” you’re more likely to fall for it. The false sense of security bred by fake AI detection guides makes you less vigilant, less critical, and ultimately, less adept at spotting AI content.
I’m done watching this farce unfold without comment, and now I’m on a mission to clean up the mess, one misleading manual at a time. But how can you learn to spot a bogus guide when it’s disguised as helpful advice?
Fake AI Guides Are About to Get Torched
Again, while I previously would’ve been reluctant to single out a specific article, the stakes are high, and the disinformation has been inflicted on thousands of readers. Additionally, one of the tenets of free speech is that published ideas are not immune to criticism; they can be tested in public.
Here is the case study. Reader beware. You don’t need to click on it — let’s not give them any more earnings or attention. I’ll give screenshots instead.
(That terrible article was then rehashed, most likely as a lazy second output, by another suspiciously new member in the same publication. Sadly, both are gaining traction, despite blatantly disingenuous tactics.)
How to spot AI “How to Spot AI” guides. Yes, really. That’s how bad the situation is. We’ve come to this.
There are three main ways to spot AI-generated “How to spot AI writing” guides, and two are a bit different to how we usually identify general AI content, because there’s an element of catching a dog chasing its own tail.
Why AI Detectors Aren’t Enough
I’m intentionally not including AI detectors on the list, as they’re an arms race; I personally believe in teaching AI literacy and critical reading as the real defence against AI deception. Sharpen those human “soft skills”, and you’ll be spotting AI fakes long after the detectors have become obsolete.
Method 1: Trust the Experts
The first is to use a credible guide to the tell-tale signs of AI and see if any of the red flags are there. I’m going to recommend my own guide, but any from the aforementioned publications and reputable authors will do. If you want the real scoop on how to recognise AI, you don’t ask the clowns playing at experts; you ask someone who actually knows how the circus works. Again: provenance. Check your sources. Approach seven-day-old accounts with caution.
I have to hand it to the “writer”: most of the obvious AI terms like “delve”, “dive”, and “in today’s fast moving world” have been scrubbed. I strongly suspect that it’s ChatGPT content that’s been run through a humanizer; likely Undetectable AI. However, to experienced prompt engineers, the procedural rhetorical patterns of AI are still visible. I’ll write a guide on this subtle distinction shortly; it’s a bit more advanced. My guide does already contain some pointers to this, mainly in procedural sentence structures.
This means it also throws off AI content detectors like ZeroGPT (which are often unreliable anyway). A humanizer works like a content spinner, switching out known culprits for lesser-used words and thereby raising the perplexity of the piece (roughly, it swaps the most probable next word for a less predictable one, much as a higher sampling temperature would). However, the burstiness (the rhythm and variation of the writing) remains much the same. There’s enough to be suspicious of, and I think anyone reading it with a critical eye might sense something hinky; ironically, though, if you’re skilled enough to notice it, you’re not at any risk.
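To make “burstiness” concrete: one rough proxy is how much sentence lengths vary across a passage. The sketch below is my own illustrative heuristic, not the algorithm any actual detector or humanizer uses; it simply measures sentence-length variation relative to the average.

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Rough proxy for 'burstiness': variation in sentence lengths.

    Humanized AI text often keeps a steady tempo, so a low score can be
    one weak signal. Illustrative heuristic only, not a real detector.
    """
    # Split on sentence-ending punctuation followed by whitespace.
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    # Coefficient of variation: stdev of sentence length over mean length.
    return statistics.stdev(lengths) / statistics.mean(lengths)

steady = "This is a line. Here is another. Now a third one. And a fourth."
varied = ("Short. But sometimes a sentence runs on and on, gathering "
          "clauses as it goes. Then stops.")
print(burstiness(steady) < burstiness(varied))  # True
```

A steady-tempo passage scores lower than one that mixes short and long sentences; on its own that proves nothing, which is exactly why critical reading still matters.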
Method 2: The Tell-Tale Absence of Tells
The real smoking gun is in the second method, which is specific to bad guides themselves. So, recall how LLMs are unaware of their actual tells? This means a fully AI-generated guide won’t divulge any of the well-known red flags of AI text. It thinks they’re part of a perfectly normal vocabulary. It’s completely unaware, like a thief who doesn’t know about fingerprints.
And sure enough: there’s no mention of delve, dive, discover, navigate or any of the usual suspects. Instead, it’s filled with generic suspicions of robotic sounding text, and reassuring platitudes that only humans sound human.
The omission of any AI tells is itself a tell. Even basic ChatGPT users would be familiar with the overuse of words like “tapestry”, but this article is not. The lack of insight is also a clue it hasn’t been written with experience. Not to blow my own trumpet, but the best writers on AI are also investigators:
Method 3: Reverse Engineering the Guide
The third and final way to spot an AI-generated guide on spotting AI text (seriously, my head is starting to hurt from the hall of mirrors) is to see if you can easily replicate it with ChatGPT. This method is useful when querying any content, but it’s particularly effective with topics such as this one, where there’s a predictable catechism that AI will fall back on. Simply prompt what you think the “writer” might’ve asked, and reverse engineer it.
While there are some spectacular prompts out there with multiple points of articulation, the type of scammers we’re talking about tend to put in the least possible effort. (In fact, you don’t have to worry about the elaborate prompters; their extra effort produces output of much higher quality.) Let’s run a prompt and put the output right next to the passages from the guide:
Guide on how to spot AI generated text
I’m willing to bet that’s as little effort as was put in (before running it through a humanizer to throw AI detectors off the scent). Let’s see if the structure and key points line up. ChatGPT on the left, guide on the right.
The similarities are uncanny. Obviously, there is some variation, as there would be even between individual regenerations in AI chats. However, there wasn’t a single point in the guide that wasn’t already in the output.
There’s Nothing Like Firsthand Experience
I invite you to try out replicating an AI guide yourself on ChatGPT. Don’t trust what it says, but familiarize yourself with what the output looks like.
Get a sense of the structure too, both of the overall piece and of individual sentences and where they appear in the output. For example, in every trial run, the conclusion ended in some version of “By understanding/paying attention to these clues/patterns, you can…”.
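That recurring closer is formulaic enough that you could even flag it mechanically. A toy sketch follows; the wording in the pattern is just the phrasing I kept seeing in my own trial runs, so treat the exact regex as an assumption, not a canonical list of AI tells.

```python
import re

# Stock wrap-up template seen repeatedly in trial outputs; the specific
# word choices here are examples from my own runs, not an exhaustive list.
CLOSER_PATTERN = re.compile(
    r"\bBy (understanding|paying attention to|recognizing) these "
    r"(clues|patterns|signs)\b.*\byou can\b",
    re.IGNORECASE,
)

def has_formulaic_closer(conclusion: str) -> bool:
    """Flag a conclusion that matches the stock AI wrap-up template."""
    return bool(CLOSER_PATTERN.search(conclusion))

print(has_formulaic_closer(
    "By paying attention to these patterns, you can spot AI text."
))  # True
print(has_formulaic_closer("Trust your judgment and read widely."))  # False
```

A match isn’t proof by itself, of course; it’s one more data point alongside the structural comparison above.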
I encourage even the staunchest anti-AI crusader to get their hands dirty. There’s nothing like experience with an AI to inoculate yourself against AI. Soon you’ll be outsmarting algorithms with nothing but your gut and good sense, proving that the best AI detector is still a well-trained human mind.
AI Isn’t the Villain Here
Now, let me be clear — there’s nothing inherently wrong with AI-generated text. I’d be a hypocrite otherwise, considering I’ve made a career out of teaching people how to better engineer prompts for advanced output. To paraphrase Oscar Wilde: “There’s no such thing as good or bad AI writing. It’s all about how it’s used — whether responsibly or recklessly. That is all.”
I don’t want to encourage a virtual AI book burning. AI is great for refining your rhetoric and brainstorming (but for goodness’ sake: NOT research). I personally use AI for the “tip of the tongue” problem that comes with my aphasia, where I know a word but can’t locate it and have to describe the shape of it (regular dictionaries aren’t great if the first letter eludes you; previously I used thesauruses, and I have a shelf of them for this purpose, but LLMs have been a godsend). I’ve written a custom GPT to help people put their needs into words for phone calls and emails if they struggle with anxiety:
I also believe people can use AI to construct better writing, much as one might with a human collaborator or editor; or using AI as a beta reader (there’s much overlooked potential here for publishers). My problem is when it’s used to deceive, as a cheap and nasty shortcut to Easy Street.
You’re going to be spotting a lot more AI with these skills. And that’s okay. Ask yourself: what is the intent here? Is it better communication? Or is it just another lazy attempt to cash in on trends without putting in the work?
Stay Sharp, Skeptical, and Curious
Watch out for confirmation bias — the same mental trap that convinces your friend they’re a wine connoisseur because they identified a Merlot one time at a party. Ask yourself if the guide is actually challenging your assumptions, or just flattering you into believing you’re already an expert.
The former is a sign of an authentic, well-researched article, but the latter could very well be an AI-generated guide. If that’s the case, you’d be better off getting advice from a Magic 8-Ball. At least then you’ll know it’s bullshit.
A note from the author
I’m going to turn off my earnings on this article (yes, even if it gets a Boost, which means I’m potentially sacrificing four figures) because I genuinely believe it’s a public service. If you’ve found it helpful, please share it so we can combat those preying on unsuspecting readers with garbage AI guides.
Those clickbait con jobs have gone viral. Consider this the vaccine drive.
Who is Jim the AI Whisperer?
I’m on a mission to demystify AI and make it accessible for everyone. I’m passionate about educating people about the new cultural literacies and critical skills we need to recognize AI online. Follow me for more advice.
Stay Updated and Engaged
Subscribe to get an email when I publish. Also check out my previously published guides on Medium. You’ll learn how to use AI platforms more effectively, and hone the skills needed to recognize AI content in the wild!