Your LLM Prompts Suck… here’s how to fix them.
5 tips for writing better prompts
90% of LLM use cases don’t require fine-tuning or agentic AI. They require you to write better prompts. In this article, I’ll review 5 best practices for getting LLMs to do what you want via prompting.
It typically doesn’t take much to make ChatGPT helpful for everyday tasks. However, it's another story to get it to accurately and reliably perform more complicated tasks (e.g., write a detailed proposal for a potential client). For that, we need something more than sentence fragments and half-baked requests.
Prompt engineering is the process of crafting AI model inputs to get good outputs. This could be a message you send to ChatGPT or a system/developer message for a larger AI application.
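To make the distinction concrete, here is a minimal sketch of what a system/developer message looks like in practice. The message structure follows the common chat-completions format; the helper function, model role text, and client name are illustrative examples, not a specific product's API:

```python
# Sketch: a half-baked request vs. an engineered prompt for the same task.
# The role/content message format mirrors what chat-style LLM APIs expect;
# the prompt wording here is purely illustrative.

def build_messages(system_prompt: str, user_request: str) -> list[dict]:
    """Package a system/developer message together with the user's request."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_request},
    ]

# Sentence-fragment request: no role, no constraints, no output format.
casual = build_messages("", "proposal for client?")

# Engineered prompt: explicit role, task, and required structure.
engineered = build_messages(
    "You are a consultant writing client proposals. "
    "Structure every proposal as: Objective, Scope, Timeline, Pricing.",
    "Draft a proposal for migrating a retail client's reporting stack "
    "to the cloud.",
)

# Either list can then be passed as the `messages` input to a
# chat-completions-style API call.
```

The same task goes in both cases; the engineered version simply front-loads the context and constraints the model would otherwise have to guess.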
While finding the best prompt for a given model and use case requires experimentation [1,2], here I’ll discuss 5 key tips you can use to improve your prompts across a wide range of situations.
Note: the focus here is on prompting non-reasoning models…