Apple Just Quietly Exposed The *AI Prompts* Powering Apple Intelligence
I never thought “do not hallucinate” would actually work.
As a guy who’s been using AI since ChatGPT was a milk-fed infant, I know the value of “role-prompting” and the remarkable accuracy it brings to AI responses.
For any AI to generate professional responses, it should be role-prompted.
That way, it assumes the role you assign (like a doctor, advisor, or coach) and delivers far better responses than a plain, unprompted request would.
For instance, a simple role-based, detailed prompt like:
You’re a senior nutritionist with 20+ years of experience and 10,000+ happy clients. I’m a 42-year-old woman with Type-2 Diabetes; suggest the best foods for me.
yields a far more helpful response than a generic prompt like:
I’m a middle-aged woman with diabetes, suggest me best foods.
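If you talk to an LLM through an API rather than a chat window, the role usually goes in the system message. Here’s a minimal sketch, assuming the OpenAI Python client; the model name and prompt text are just placeholders:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Role-prompt in the system message: the model "becomes" the nutritionist.
role_prompt = (
    "You're a senior nutritionist with 20+ years of experience "
    "and 10,000+ happy clients."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any chat-capable model works
    messages=[
        {"role": "system", "content": role_prompt},
        {
            "role": "user",
            "content": (
                "I'm a 42-year-old woman with Type-2 Diabetes. "
                "Suggest the best foods for me."
            ),
        },
    ],
)

print(response.choices[0].message.content)
```

Drop the system message and you’re back to the generic prompt above; the difference in specificity is usually obvious after one run.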
Even Apple, the trillion-dollar corp, has used this exact technique to instruct its now-in-beta Apple Intelligence.
The prompts tell it what to do.
And more importantly, what not to do — to keep users safe and to avoid hallucinations.