Prompt, or be prompted: the AI survival skill no one can ignore
Just a few years ago, when people started talking about the idea of a “prompt engineer,” many dismissed it as another inflated tech industry buzzword. But it didn’t take long for this so-called “job of the future” to become a basic skill. Today, it’s assumed that anyone interacting with AI systems knows — or should know — how to ask the right questions. Those who don’t risk more than just coming last in the career race: they risk receiving answers that could cause reputational damage, financial losses, or even legal trouble.
I’m talking about AI hallucination, which poses a growing danger (“hallucination” is a terrible name for the concept, but since everybody seems to be using it, we will have to tag along). Here’s a typical incident: the support bot of Cursor, a developer tool, recently told several users they’d lost access to the platform, citing a policy change that never happened. The company scrambled to contain the fallout on Reddit while some users canceled their subscriptions, all because a model, working off internal probability estimates, had invented a scenario without any external grounding.
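One practical defense, and a core prompt-engineering habit, is to ground the model in verified source text instead of letting it free-associate. Here is a minimal Python sketch of the idea; the policy snippet, the question, and the function name are hypothetical illustrations, not Cursor’s actual setup.

```python
# A minimal sketch of "grounded" prompting: rather than letting the model
# answer purely from its internal probability estimates, we paste verified
# source text into the prompt and forbid answers from outside it.
# The policy snippet and question below are hypothetical examples.

RETRIEVED_POLICY = """\
Subscriptions: one account may be used on up to three devices.
Access is never revoked for multi-device use."""

def build_grounded_prompt(question: str, context: str) -> str:
    """Wrap a user question in instructions that pin the model to the context."""
    return (
        "Answer the question using ONLY the context below. "
        "If the context does not contain the answer, reply exactly: "
        "\"I don't know.\"\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

if __name__ == "__main__":
    print(build_grounded_prompt(
        "Why was I logged out on my second device?",
        RETRIEVED_POLICY,
    ))
```

The same question sent without the context block invites exactly the failure Cursor hit: the model fills the gap with whatever policy sounds most statistically plausible, whether or not it exists.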
Recent studies show that newer generations of reasoning models — from OpenAI’s o3 and o4-mini to Google’s and DeepSeek’s latest — consistently hallucinate more than earlier versions. According to OpenAI’s own…