What is the Engineering part of Prompt Engineering?

Chenzihongsnow
4 min read · Jul 11, 2024


Prompt engineering is the intersection of engineering and art.

The engineering aspect lies in the fact that structured and well-organized inputs often lead to better outputs. The artistic aspect comes from users being able to infuse their own creativity and aesthetic sensibilities, resulting in more compelling generated text.

Some “hearsay” ChatGPT tips:

  • Threaten large language models with death
  • Give large language models a $10 tip

Some people might excel at crafting prompts and interact naturally with large language models.

However, most people may not be able to (or may not bother to) do so.

Certainly, “if you can speak, you can use a large model,” but not everyone can effectively use a large model even if they can speak. Many people may need additional help to create high-quality prompts for generating satisfactory text.

The reason why large models are not as universally accessible as smartphones is that while the barrier to entry is low, it’s not quite low enough.

From the perspective of an average user, clicking the retry button countless times may still not yield the desired answer.

From a developer’s perspective, spending day after day tweaking prompts (if it can even be called tweaking) is frustrating. The interaction offers no real editing, review, or iteration, and prompts cannot be modularized or reused; that is exactly why many programmers shy away from becoming prompt engineers.

Since it’s already called “prompt engineering,” why not engineer it?

By “engineering,” I don’t mean scouring the internet for GPT usage insights and jotting them down in a notebook to refer to every time you use a large model.

Prompt engineering is simple, but I aim to make it even simpler. That’s why, within the Polish Your Prompt project, I built a tool that allows anyone to easily become a master of prompts, creating prompts that can be reused throughout your lifetime (and even passed down through generations).

We offer three types of prompt optimizers:

  1. Simple Refiner: Optimizes your prompts with straightforward prefix adjustments.
  2. Schema Refiner: Optimizes your prompts using widely used prompt templates, such as COSTAR and RISE.
  3. Annotated Refiner: Optimizes your prompts through annotated text.

1 Simple Refiner

Prefix your input to the LLM to get a better prompt.

You are a professional prompt engineer.
Think carefully, you need to refine the following text to make it more formal and professional.
You need to add more details to the text for better generation.
Please directly output the refined text.
Do not include any additional information.
Refine the following text: '

Of course, you can also choose a custom prefix or input manually (if you don’t mind the hassle).

Using:

prompt = "Write a blog about Python for me."
refiner.refine(prompt)

The refined prompt:

Please compose a comprehensive blog post on the programming language Python, highlighting its key features, applications, and benefits. Additionally, provide insights on its versatility, ease of use, and popularity within the software development community. Thank you.
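Under the hood, a prefix-based refiner only needs to prepend the instruction text and send the result to a model. Here is a minimal sketch of that idea; the class name, constructor, and the callable-LLM parameter are illustrative, and the actual Polish Your Prompt implementation may differ.

```python
# Minimal sketch of a prefix-based prompt refiner.
# Names are illustrative, not the library's real API.
PREFIX = (
    "You are a professional prompt engineer.\n"
    "Think carefully, you need to refine the following text to make it "
    "more formal and professional.\n"
    "You need to add more details to the text for better generation.\n"
    "Please directly output the refined text.\n"
    "Do not include any additional information.\n"
    "Refine the following text: "
)

class SimpleRefiner:
    def __init__(self, llm, prefix=PREFIX):
        # `llm` is any callable mapping a prompt string to a completion string,
        # e.g. a wrapper around your favorite chat API.
        self.llm = llm
        self.prefix = prefix

    def refine(self, prompt: str) -> str:
        # Prepend the instruction prefix, then let the model rewrite the prompt.
        return self.llm(self.prefix + repr(prompt))

# A stub "LLM" so the sketch runs without an API key.
refined = SimpleRefiner(lambda p: p.upper()).refine("Write a blog about Python for me.")
```

Swapping the stub for a real model call is the only change needed to reproduce the behavior above.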

2 Schema Refiner

Polish Your Prompt supports over 15 prompt frameworks, structured and standardized to mirror existing popular frameworks. Some you may have heard of, others perhaps not: APE, BROKE, COSTAR, and more.

Take COSTAR as an example:

COSTAR breaks down prompts into several components:

  • (C) Context: Provides background information for the task.
  • (O) Objective: Clearly states what you want the large language model to accomplish.
  • (S) Style: Specifies the desired writing style.
  • (T) Tone: Sets the emotional tone for the response.
  • (A) Audience: Identifies the target audience.
  • (R) Response: Specifies the format of the output.
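Assembling these six components into a single prompt is mechanical, which is what makes the framework easy to automate. The sketch below shows one way to do it; the function name and heading format are illustrative, not the library's internal schema.

```python
# Illustrative: assemble a CO-STAR prompt from its six components.
def build_costar(context, objective, style, tone, audience, response):
    sections = {
        "CONTEXT": context,
        "OBJECTIVE": objective,
        "STYLE": style,
        "TONE": tone,
        "AUDIENCE": audience,
        "RESPONSE": response,
    }
    # Each component becomes a "# NAME #" section in framework order.
    return "\n".join(f"# {name} #\n{text}" for name, text in sections.items())

prompt = build_costar(
    context="You are a student in high school taking a math class.",
    objective="Seek assistance with completing your homework assignment.",
    style="Informal student seeking help.",
    tone="Polite and respectful.",
    audience="Classmate or tutor who can provide guidance.",
    response="Clear explanation of the homework task and request for assistance.",
)
```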

You can specify a framework here to achieve similar outputs.

prompt = "help me do my homework"
refiner = SchemaRefiner()
structure, prompt = refiner.refine(prompt, schema=COSTAR(), mode=MODE.ONE_STEP)

The refined prompt:

# CONTEXT #
You are a student in high school taking a math class.
# OBJECTIVE #
Seek assistance with completing your homework assignment.
# STYLE #
Informal student seeking help.
# TONE #
Polite and respectful.
# AUDIENCE #
Classmate or tutor who can provide guidance.
# RESPONSE #
Clear explanation of the homework task and request for assistance.

3 Annotated Refiner

When you find discrepancies between the output of a large model and what you envisioned, it’s often due to unclear or insufficiently precise prompts in the original input. In the Annotated Refiner, we can annotate one or more parts of the generated content from the model and provide feedback, allowing it to revise the original prompt, creating a constructive feedback loop.

Here’s a simple example:

refiner = AnnotatedRefiner()
refiner.refine(
    prompt="help me do my homework",
    content="You are a student who needs help with homework. "
            "You are struggling with a math problem and need assistance.",
    annotations={"math": "not math, but english"},
)

The refined prompt:

Help me with my English homework.
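The key step in this loop is turning each annotation into an explicit revision instruction that can be fed back to the model. A hedged sketch of that step, with purely illustrative names (the real AnnotatedRefiner presumably sends this feedback to an LLM rather than building a string for inspection):

```python
# Sketch: convert annotations on the model's output into a revision request.
def build_feedback(prompt, content, annotations):
    # Each annotated span becomes an explicit correction for the LLM to apply.
    notes = "\n".join(
        f'- The span "{span}" in the output is wrong: {comment}'
        for span, comment in annotations.items()
    )
    return (
        f"Original prompt: {prompt}\n"
        f"Model output: {content}\n"
        f"Feedback on the output:\n{notes}\n"
        "Rewrite the original prompt so that the feedback is addressed."
    )

feedback = build_feedback(
    prompt="help me do my homework",
    content="You are a student who needs help with homework. "
            "You are struggling with a math problem and need assistance.",
    annotations={"math": "not math, but english"},
)
```

Sending `feedback` to a model and replacing the old prompt with its answer closes the loop; repeating the process lets the prompt converge on what you actually meant.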

The project can be found at: https://github.com/ChenZiHong-Gavin/Polish-Your-Prompt

Currently, it is available both as source code and as a PyPI package.

You can install it directly using the command:

pip install polish_your_prompt

Future plans may include support for a drag-and-drop interactive interface similar to ComfyUI.
