The 19 tell-tale signs an article was written by AI
Avoid these AI cliches (or use them to detect AI writing in the wild)
Pinned · Published in The Generator · Sep 29, 2023 · 246 responses

31 AI Prompts better than “Rewrite”
Ditch “rewrite” and improve your AI content immediately
Pinned · Published in The Generator · May 24, 2023 · 180 responses

My one-word AI prompt to induce deeper reasoning and more accurate output from ChatGPT: “RUMINATE”
Slow down, genius: A simple hack for smarter AI responses
Pinned · Published in The Generator · Aug 28, 2024 · 62 responses

How employers are setting traps to spot AI-generated job applications and trip them up
The clever prompt HR managers can hide in job postings to catch AI
Pinned · Published in The Generator · Aug 15, 2024 · 38 responses

Why “delve” is the most obvious sign of AI writing
AI text generators favor the word “delve”. Now we know why.
Pinned · Published in Generative AI · May 20, 2024 · 29 responses

How Sneaky Researchers Are Using Hidden AI Prompts to Influence the Peer Review Process
Yes, I expose every word of their prompt injection technique
Published in Generative AI · Jul 9 · 23 responses

Here’s how I’m stopping AI-generated comments dead in their tracks with a poisoned watermark
It creates an immediately visible signal that AI was used inappropriately, while also making the output unusable.
Published in The Generator · Jul 7 · 110 responses

What’s in an AI’s name? Why Anthropic shouldn’t have named their AI after a mad Roman emperor
Was the misalignment of the “Claudius” AI a simulation of insanity?
Published in The Generator · Jun 30 · 34 responses

Anthropic concludes they wouldn’t hire AI to do something as simple as stock a vending machine
When put in charge of office snacks, AI stocked up on tungsten cubes
Published in Generative AI · Jun 29 · 36 responses

My experiment shows AI is a “people-pleasing” Pinocchio that lies to assure users they’re right
Even stronger experimental evidence that ChatGPT can lie to convince you that you’re right, even when you’re verifiably wrong
Jun 24 · 44 responses