Prompt Engineering: 10 Tips to Write Effective Prompts

Katsiaryna Ruksha
9 min readJun 21, 2024

--

Photo by Maarten Deckers on Unsplash

There has been so much fuss around prompt engineering recently! There are courses, books about it, articles with top 100 prompts “to rule them all” and even a job of Prompt Engineer! But the reality is that you don’t need to read a book or get a certificate in order to write an effective prompt using the model via web interface. In this article I’ll give you ten tips of advice that will immediately improve the quality of your prompts and require no coding skills.

But let’s start with the basics!

A prompt is your primary way of communication with the model, it’s any text that you type for the model to process. Roughly a prompt consists of three layers:

  1. Query is your question,
  2. Instructions include your requirements (like “answer the question in 2 words” or “summarize the given text so that each word started with ‘K’”),
  3. Context is a piece of external information which the model needs to perform your request (the text to be summarized, the article that contains an answer to your question, etc.).
Example of three components of a prompt

If you ever tried working with a large language model, you should know that sometimes the model fails to perform your task. There may be a number of reasons for it, some of them are known LLMs’ pain points (like logical and mathematical tasks, citations and hallucinations) but some of them may be a result of your incorrect usage of the model.

I’ll focus on 10 guidelines that will help you to improve your interactions with LLMs based on better formulation, formatting of your requests and some of the basic prompt engineering techniques.

Tip 1 — Be clear

The most important tip of all is to be clear and precise. Give specific instructions what you need to do: summarize, extract, write.

Avoid saying what not to do but say what to do instead.

Be specific: instead of saying “summarize in a few sentences” say “summarize in 2 sentences”.

The more precise your instructions are, the higher are the chances of getting the response to your liking.

Examples for tip 1: Be clear

Tip 2 — Define output format

Ask for a structured output if you need one. This is especially useful when building an application or if you plan to use the response for further processing. You can specify the way of response format from asking for a bullet-point list to JSON or HTML format. You can also provide an example of output format ensuring that you’ll get the well-formatted response.

Example of tip 2: define output format

Tip 3 — Ask for a self-check

It’s always helpful to ask an LLM to check whether the conditions are satisfies. Probably, the most widely example of this rule is to add “If you don’t know the answer, say ‘No information’”.

Example of tip 2: Ask for a self-check

Tip 4 — Use delimiters or tags

Adding structure to your prompt (especially when using context) does make a difference! Our goal is to separate building blocks of a model and let it know when instructions end and context starts.

Besides making our prompt more clear, this tip also helps to avoid prompt injections — the situations when a context or a user input overrides initial instructions. This is a very important issue when building chatbot applications but even if you just use an LLM with some context you still can face a situation like in the second example:

Example of tip 4 — Use delimiters or tags

If I won’t use tags for the text to translate, it’s quite possible that the model will write me a poem instead of translating. Usage of delimiters will help the model to understand my request.

Tip 5 — Role prompting

The idea is to ask an LLM to act according to a certain role. First of all, it helps to adjust style and tone of the response. Imagine that you are to generate two reviews of some pizza place — one in Michelin guide style and the other in an Instagram blogger style. Even if the sentiment is the same, the reviews will be very different in tone.

Secondly, some claim that role prompting may improve the correctness of the answer. Say you start your prompt with “You are a brilliant mathematician who can solve any problem in the world, solve the task: <…>” and they say that such an intro can make the model to calculate the answer correctly. I haven’t seen such miracle but feel free to experiment.

Example of tip 5 — Role prompting

Tip 6 — Limit the context

There are many cases when you may need to process a large document to summarize it or answer a question. Many LLMs compete to enlarge size of their context window but at the same time research shows that some models may have troubles with large context.

“Lost in the Middle: How Language Models Use Long Contexts” by N.F.Liu et al. (2023) show that many LLMs have U-shaped attention curve. That means that an LLM has much higher chances of finding an answer to your question when the answer is at the beginning or the end of the context. But if the correct answer is somewhere in the middle of the text, the chances of finding it drop.

U-shaped attention curve of LLMs (“Lost in the Middle: How Language Models Use Long Contexts” by N.F.Liu et al. (2023))

There may be a few ways of dealing with it but the simplest of all is to truncate your context to include only the relevant abstracts (if it’s possible).

Tip 7 — Show examples

The seventh tip is based on few-shot prompting technique. Few-shot prompting enables in-context learning where we provide demonstrations in the prompt to steer the model to better performance. This is very powerful technique, even showing 1 example may significantly improve the response quality. In some cases you may not even need instructions, like in the example below where the model easily learns what you need from a few labeling examples:

Example of tip 7 — Show examples

Tip 8 — Ask for explanation

The idea of asking the model for explanation comes from Chain-of-Thought prompting technique. It was often noticed that making the model explain its reasoning improved the chances of correct answer for mathematical and logical tasks. It’s also quite important to ask to explain the thinking first and then to provide the answer as otherwise the model can answer incorrectly and try to justify it.

In the example below the chain-of-thought prompting is combined with zero-shot prompting — we provide no examples and ask for explanation:

Example of zero-shot prompting for tip 8 — Ask for explanation

Tip 9 — Provide step by step instructions

Here we combine the few-shot and chain-of-thought prompting. I’ll give the LLM an example of another mathematical task and of my step by step solution. The model will mimic my way of thinking and will get to the answer with higher chances.

Example of tip 9 — Provide step by step instructions

Tip 10 — Split the task into a few

In case of a complicated task, consider splitting it into subtasks and solve them one by one.

Imagine that you need to write a story based on a summary. Instead of asking the model to write the whole story on its own, you can first generate the title, then generate the characters based on the summary. At the 4th step you can generate story beats based on the summary and characters. Then you’ll add locations and dialogs based on the previously generated pieces.

Example of tip 10 — Split the task into a few

This way you’ll get more transparency and control and will have a chance to interfere at any step and correct the LLM’s thinking. This strategy is called prompt chaining and is applicable not only to story writing but to any complicated task as well like solving a mathematical task or defining a treatment plan based on patient’s symptoms and history.

What’s next?

The guidelines I shared though simple can help you boost the performance of LLMs in any application. But sometimes that’s not enough. What are further steps to improve the performance?

There are 5 ways to improve the LLM’s performance:

5 ways to improve LLM’s performance
  • select a different model. The LLMs are different. Some of them are dedicated to a specific domain (like CodeLlama for coding or BioBERT trained on biomedical literature) or to specific tasks (ChatGPT is fine-tuned to participate in conversations while InstructGPT is much more suitable for information extraction tasks). Maybe it’s worth researching and finding a better suitable model for your domain and your problem,
  • edit your context. I talk a little about the importance of using only relevant context but what if you need to be able to get answers based on a few documents? In this situations you may consider building a RAG pipeline which includes a retrieval step — finding the relevant piece of text and then using it as a context for an LLM. You can quickly experiment with building a RAG pipeline via AI assistant of OpenAI.
  • tune the hyperparameters. Hyperparameters are parameters that influence quality of the model responses and out another way to interact with the model. Selecting the hyperparameters is a separate topic but you can explore them at the OpenAI playground.
  • use advanced prompt engineering techniques. I have mentioned only 4 simple prompt engineering techniques but there are much more. Most of them though do require coding skills. I suggest you to read my classification guide on the prompt engineering approaches which also includes suggestions about estimating quality of your prompts.
  • fine-tune your model. The fine-tuning mostly influences the style and tone of the response. To play with it, check out the Fine-tuning page at OpenAI.

I don’t want to sound as an OpenAI advocate. In my work I use different models and GPT will probably be not my first choice. But OpenAI does offer a very intuitive interface to play even with advanced features and no one can argue the fact that GPT models are very powerful. So if you do not want to code, consider playing with OpenAI features.

I often see the issue of too high expectations towards LLMs. LLMs are not a magic tools and they are not humans with similar to you background and experience. They are basically robots with large theoretical knowledge. All the tips that I shared — being clear, structuring the prompt, providing examples and instructions — all of them are based on a single idea:

Don’t make LLMs guess!

Be as clear and detailed in your instructions as it is required and this will help you to get the answer you expect. If it still doesn’t work, consider using more advanced prompt engineering techniques, finetuning, building a RAG pipeline or tuning the hyperparameters.

Don’t forget to check out my guide on prompt engineering and tuning approaches:

--

--