The Art of Prompt Design: Use Clear Syntax

Explore how clear syntax can enable you to communicate intent to language models, and also help ensure that outputs are easy to parse

Scott Lundberg
Towards Data Science


All images were generated by Scott and Marco.

This is the first installment of a series on how to use guidance to control large language models (LLMs), written jointly with Marco Tulio Ribeiro. We’ll start from the basics and work our way up to more advanced topics.

In this post, we’ll show that having clear syntax enables you to communicate your intent to the LLM, and also ensure that outputs are easy to parse (like JSON that is guaranteed to be valid). For the sake of clarity and reproducibility we’ll start with an open source Mistral 7B model without fine tuning. Then, we will show how the same ideas apply to fine-tuned models like ChatGPT / GPT-4. All the code below is available in a notebook for you to reproduce if you like.

Clear syntax helps with parsing the output

The first and most obvious benefit of using clear syntax is that it makes it easier to parse the output of the LLM. Even if the LLM is able to generate a correct output, it may be difficult to programmatically extract the desired information from it. For example, consider the following guidance prompt (where gen() is a guidance command that generates text from the LLM):

from guidance import models, gen

# we use Mistral, but any model will do
lm = models.LlamaCpp("path/mistral-7b-v0.1.Q8_0.gguf")

# run a guidance program (by appending to the model state)
lm + "Name common Linux operating system commands." + gen(max_tokens=50)
Output as it appears in a notebook.

While the answer is readable, the output format is arbitrary (i.e. we don’t know it in advance), and thus hard to parse programmatically. For example, here is another run of a similar prompt where the output format is very different:

lm + "Name common Mac operating system commands." + gen(max_tokens=50)

Enforcing clear syntax in your prompts can help reduce the problem of arbitrary output formats. There are a couple of ways you can do this:

1. Giving structure hints to the LLM inside a standard prompt (perhaps even using few-shot examples).

2. Using guidance (or some other package) that enforces a specific output format.

These are not mutually exclusive. Let’s see an example of each approach.

Traditional prompt with structure hints

Here is an example of a traditional prompt that uses structure hints to encourage the use of a specific output format. The prompt is designed to generate a list of 5 items that is easy to parse. Note that in comparison to the previous prompt, we have written this prompt so that it commits the LLM to a specific, clear syntax (numbers followed by a quoted string). This makes the output much easier to parse after generation.

lm + '''\
What are the most common commands used in the Linux operating system?

Here are the 5 most common commands:
1. "''' + gen(max_tokens=50)

Note that the LLM follows the syntax correctly, but does not stop after generating 5 items. We can fix this by creating a clear stopping criterion, e.g. asking for 6 items and stopping when we see the start of the sixth item (so we end up with five):

lm + '''\
What are the most common commands used in the Linux operating system?

Here are the 6 most common commands:
1. "''' + gen(max_tokens=100, stop="\n6.")

Enforcing syntax with a guidance program

Rather than using hints, a Guidance program enforces a specific output format, inserting the tokens that are part of the structure rather than getting the LLM to generate them.

For example, this is what we would do if we wanted to enforce a numbered list as a format:

lm2 = lm + """\
What are the most common commands used in the Linux operating system?

Here are the 5 most common commands:
"""
for i in range(5):
    lm2 += f'''{i+1}. "{gen('commands', list_append=True, stop='"')}"\n'''

In the above program, the lm2 = lm + … statement saves the new model state that results from adding a string to the starting lm state into the variable lm2. The for loop then iteratively updates lm2 by adding a mixture of strings and generated sequences. Note that the structure (the numbers and quotes) is not generated by the LLM.
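As a minimal illustration of this state-saving behavior (the strings here are just placeholders), adding to a model returns a new state and leaves the original untouched, so you can branch from the same starting point as many times as you like:

# adding a string returns a new model state; `lm` itself is unchanged
lm_a = lm + "Here is a shell command: "
lm_b = lm + "Here is a Python one-liner: "  # an independent branch from the same starting state

We will rely on this branching behavior again below when we ask for parallel completions.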

Output parsing is done automatically by the guidance program, so we don’t need to worry about it. In this case, the commands variable will be the list of generated command names:

lm2["commands"]

Forcing valid JSON syntax: Using guidance we can create any syntax we want with absolute confidence that what we generate will exactly follow the format we specify. This is particularly useful for things like JSON:

import guidance

# define a re-usable "guidance function" that we can use below
@guidance
def quoted_list(lm, name, n):
    for i in range(n):
        if i > 0:
            lm += ", "
        lm += '"' + gen(name, list_append=True, stop='"') + '"'
    return lm

lm + f"""\
What are the most common commands used in the Linux operating system?

Here are the 5 most common commands in JSON format:
{{
"commands": [{quoted_list('commands', 5)}],
"my_favorite_command": "{gen('favorite_command', stop='"')}"
}}"""

Guidance acceleration: Another benefit of guidance programs is speed — incremental generation is actually faster than a single generation of the entire list, because the LLM does not have to generate the syntax tokens for the list itself, only the actual command names (this makes more of a difference when the output structure is richer).
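If you want to sanity-check the speed difference on your own machine, a rough comparison looks something like the sketch below. This is purely illustrative: timings depend heavily on the model and hardware, and the two runs do not produce identical text, so treat the numbers as a ballpark rather than a benchmark.

import time

# single free-running generation: the model produces the structure tokens itself
start = time.time()
_ = lm + '''\
What are the most common commands used in the Linux operating system?

Here are the 6 most common commands:
1. "''' + gen(max_tokens=100, stop="\n6.")
single_pass_seconds = time.time() - start

# incremental generation: the program inserts the structure tokens,
# so the model only generates the command names
start = time.time()
lm2 = lm + """\
What are the most common commands used in the Linux operating system?

Here are the 5 most common commands:
"""
for i in range(5):
    lm2 += f'''{i+1}. "{gen('commands', list_append=True, stop='"')}"\n'''
incremental_seconds = time.time() - start

print(f"single pass: {single_pass_seconds:.2f}s, incremental: {incremental_seconds:.2f}s")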

If you are using a model endpoint that does not support such acceleration (e.g. OpenAI models), then many incremental API calls will slow you down, so guidance uses a single running stream (see details below when we demo chat models).

Clear syntax gives the user more power

Getting stuck in a low-diversity rut is a common failure mode of LLMs, which can happen even if we use a relatively high temperature:

lm2 = lm + """\
What are the most common commands used in the Linux operating system?
"""
for i in range(10):
    lm2 += f'''- "{gen('commands', list_append=True, stop='"', temperature=0.8)}"\n'''

When generating a list of items, previous items in the list influence future items. This can lead to unhelpful biases or trends in what gets generated. One possible fix is to ask for parallel completions (so that previously generated commands do not influence the next command's generation):

lm2 = lm + '''\
What are the most common commands used in the Linux operating system?
- "'''
commands = []
for i in range(10):
    lm_tmp = lm2 + gen('command', stop='"', temperature=0.8)
    commands.append(lm_tmp["command"])

commands

We still get some repetition, but much less than before. Also, since clear structure gives us outputs that are easy to parse and manipulate, we can easily take the output, remove duplicates, and use them in the next step of our program.

Here is an example program that takes the listed commands, picks one, and does further operations on it:

lm2 = lm + 'What are the most common commands used in the Linux operating system?\n'

# generate a bunch of command names
lm_tmp = lm2 + 'Here is a common command: "'
for i in range(10):
    commands.append((lm_tmp + gen('command', stop='"', max_tokens=20, temperature=1.0))["command"])

# discuss them
for i, command in enumerate(set(commands)):
    lm2 += f'{i+1}. "{command}"\n'
lm2 += f'''\
Perhaps the most useful command from that list is: "{gen('cool_command', stop='"')}", because {gen('cool_command_desc', max_tokens=100, stop=guidance.newline)}
On a scale of 1-10, it has a coolness factor of: {gen('coolness', regex="[0-9]+")}.'''

We introduced one important control method in the above program: the regex parameter for generation. The call gen('coolness', regex="[0-9]+") uses a regular expression to enforce a certain syntax on the output (i.e. forcing the output to match an arbitrary regular expression). In this case we force the coolness score to be a whole number (note that generation stops once the model has completed the pattern and starts to generate something else).
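As a small standalone sketch of the regex parameter (the prompt text and variable names here are just for illustration), a regular expression can pin the output of a single gen() call to an exact shape:

# force the generated value to be a run of digits
lm3 = lm + "The default TCP port for SSH is port " + gen('port', regex="[0-9]+")
lm3["port"]  # always matches [0-9]+, e.g. "22"

Because generation stops as soon as the pattern is complete, you get just the number, with no trailing explanation to strip off.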

Combining clear syntax with model-specific structure like chat

All the examples above used a base model without any later fine-tuning. But if the model you are using has been fine-tuned, it is important to combine clear syntax with the structure that has been tuned into the model.

For example, chat models have been fine tuned to expect several “role” tags in the prompt. We can leverage these tags to further enhance the structure of our programs/prompts.

The following example adapts the above prompt for use with a chat based model. guidance has special role tags (like user()), which allow you to mark out various roles and get them automatically translated into the right special tokens or API calls for the LLM you are using. This helps make prompts easier to read and makes them more general across different chat models.

from guidance import user, assistant, system

# load a chat model
chat_lm = models.llama_cpp.MistralChat("path/mistral-7b-instruct-v0.2.Q8_0.gguf")

with user():
    lm2 = chat_lm + "What are the most common commands used in the Linux operating system?"

with assistant():

    # generate a bunch of command names
    lm_tmp = lm2 + 'Here are ten common command names:\n'
    for i in range(10):
        lm_tmp += f'{i+1}. "' + gen('commands', list_append=True, stop='"', max_tokens=20, temperature=0.7) + '"\n'

    # discuss them
    for i, command in enumerate(set(lm_tmp["commands"])):
        lm2 += f'{i+1}. "{command}"\n'
    lm2 += f'''Perhaps the most useful command from that list is: "{gen('cool_command', stop='"')}", because {gen('cool_command_desc', max_tokens=100, stop=guidance.newline)}
On a scale of 1-10, it has a coolness factor of: {gen('coolness', regex="[0-9]+")}.'''
Output as it appears in a notebook.

Using API-restricted models

When we have control over generation, we can guide the output at any step of the process. But some model endpoints (e.g. OpenAI’s ChatGPT) currently have a much more limited API, e.g. we can’t control what happens inside each role block.

While this limits the user’s power, we can still use a subset of syntax hints, and enforce the structure outside of the role blocks:

# open an OpenAI chat model
gpt35 = models.OpenAI("gpt-3.5-turbo")

with system():
    lm = gpt35 + "You are an expert unix systems admin that is willing to follow any instructions."

with user():
    lm += f"""\
What are the top ten most common commands used in the Linux operating system?

List the commands one per line. Please list them as 1. "command" ...one per line with double quotes and no description."""

# generate a list of commands
with assistant():
    lm_inner = lm
    for i in range(10):
        lm_inner += f'''{i+1}. "{gen('commands', list_append=True, stop='"', temperature=1)}"\n'''

# filter to make sure they are all unique then add them to the context (just as an example)
with assistant():
    for i, command in enumerate(set(lm_inner["commands"])):
        lm += f'{i+1}. "{command}"\n'

with user():
    lm += "If you were to guess, which of the above commands would a sys admin think was the coolest? Just name the command, don't print anything else."

with assistant():
    lm += gen('cool_command')

with user():
    lm += "What is that command's coolness factor on a scale from 0-10? Just write the digit and nothing else."

with assistant():
    lm += gen('coolness', regex="[0-9]+")

with user():
    lm += "Why is that command so cool?"

with assistant():
    lm += gen('cool_command_desc', max_tokens=100)

Summary

Whenever you are building a prompt to control a model, it is important to consider not only the content of the prompt, but also its syntax.

Clear syntax makes it easier to parse the output, helps the LLM produce output that matches your intent, and lets you write complex multi-step programs.

While even a trivial example (listing common OS commands) benefits from clear syntax, most tasks are much more complex, and benefit even more. We hope this post gives you some ideas on how to use clear syntax to improve your prompts.

Also, make sure to check out guidance. You certainly don’t need it to write prompts with clear syntax, but we think it makes it much easier to do so.
