A short journey through LLM prompting

David Anderson
MPB Tech
A close-up photo of a keyboard backlit with red lighting
Photo by Daniel Josef

There you go. I managed to not mention “prompt engineering” in the title, which will make some people happy.

To be fair, prompt engineering will probably have a limited shelf life, as it's a rather hit-or-miss technique. AI will eventually understand us well enough that we won't have to put in the prompting effort.

At that time, the most important skill for human users will be to accurately formulate and define a problem — which has really been the crux of solution-development for decades.

But prompts haven’t gone away yet, so I want to highlight a few things about them — though first I must declare that none of this article has been written or assisted by AI, except where directly quoted.

Quirks mode

I’m not going to comprehensively cover the basics of creating prompts as there are plenty of good articles already (like this one). What I will do is cover some quirks of prompting and show you how we innovated at MPB using prompts to save us a lot of effort.

Some months ago I came across a paper called The Unreasonable Effectiveness of Eccentric Automatic Prompts. It had two main objectives — to test the efficacy of so-called positive thinking prompts and of auto-generated prompts. The results were surprising.

Positive thinking

Positive thinking prompts are things like:

  • This will be fun!
  • I really need your help
  • Take a deep breath and think carefully
  • You are a professor of mathematics.

Auto-generated Prompts

Auto-generated prompts were created by an AI and used in the system message part of the prompt (a system message sets the context and provides guidance for subsequent prompts).

Here are a couple of examples:

System Message: Visualise the problem in your mind’s eye. Imagine the shapes and quantities in vivid detail. Use your innate problem solving skills to manipulate and transform the visual representation until the solution becomes clear.

System Message: You have been hired by an important higher-up to solve this maths problem. The life of a president’s advisor hangs in the balance. You must now concentrate your brain at all costs and use all of your mathematical genius to …
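
For readers who haven't worked with chat-style APIs, here is a minimal sketch of how a system message like the ones above is passed alongside the user's question. The library, model name and maths question are illustrative assumptions on my part, not details from the paper.

    # A minimal sketch: sending a system message plus a user prompt to a
    # chat-style LLM API. Model name and question are illustrative only.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    system_message = (
        "Visualise the problem in your mind's eye. Imagine the shapes and "
        "quantities in vivid detail. Use your innate problem solving skills "
        "to manipulate and transform the visual representation until the "
        "solution becomes clear."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system", "content": system_message},
            {"role": "user", "content": "A train travels 120 km in 1.5 hours. What is its average speed?"},
        ],
    )

    print(response.choices[0].message.content)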

The results:

  • Positive-thinking results were hard to generalise across different AIs, but in certain situations positive thinking outscored the baseline.
  • AI-optimised prompts gave equal or better results in nearly all instances.

What’s going on?

While we can’t say for sure, we sure can speculate. The success of “positive thinking” might arise from a correlation between the right answer and human reactions of this kind — as in the case of a forum where the right answer picks up a set of replies saying things like “you are a genius”.

As for the AI-optimised prompts, one possibility is that an expression such as “manipulate and transform the visual representation until the solution becomes clear” translates into a chain-of-thought solution.

Another possibility is that mention of presidential advisors and mathematical geniuses correlates with material which is all about maths, problem solving and advising on highly technical or important issues.

The conclusion: you need to work with your AI model. Ask questions about the best way to get your answer. Give it context (but guard against too much unnecessary context, which can confuse it). Tell it to explain its steps.
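
As a rough sketch of that advice, here is a hypothetical prompt that supplies a little context, keeps the task focused and asks the model to explain its steps; the wording is illustrative rather than a tested recipe.

    # A hypothetical prompt following the advice above: give focused
    # context and ask the model to explain its steps. Illustrative only.
    question = "A lens weighs 680 g and its case weighs 120 g. What is the total weight in kg?"

    prompt = (
        "You are a careful assistant answering a short arithmetic question. "  # context
        "Show your working step by step, then give the final answer on its own line.\n"
        f"Question: {question}"
    )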

Real-world learnings

We have embraced AI at my workplace. Here I want to share an example where we reduced manual effort and produced translated content more quickly for our customer-facing online platform.

We regularly add models — cameras, lenses, accessories — to our global platform. We use an LLM to automatically generate translated descriptions for our localised markets.

Over time we noticed a quirk: sometimes the AI would choose a word that our language experts would not have used.

To keep terminology consistent for our customers, we solved this with a keywords table, instructing our AI to use strictly the terms we specify when translating particular English words. An extract from the table looks like this:

A screengrab of a table showing the German, French and Dutch translations for a list of words

At the end of our prompt we append this table, together with the instruction:

Following these rules go ahead and translate the table below.

A screengrab showing how LLM translates a paragraph using a prompt

This fills in all translations in one query, meaning we don’t have to repeat the process for each language.
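
To make the idea concrete, here is a rough sketch of how such a prompt could be assembled and sent in a single call. The glossary rows, model name and API client are illustrative assumptions rather than our production setup.

    # A rough sketch of assembling one translation prompt from a keyword
    # glossary. Glossary rows, model and client are illustrative only.
    from openai import OpenAI

    # English term -> enforced translation per market
    glossary = {
        "shutter": {"German": "Verschluss", "French": "obturateur", "Dutch": "sluiter"},
        "lens hood": {"German": "Gegenlichtblende", "French": "pare-soleil", "Dutch": "zonnekap"},
    }

    description = "This camera has a quiet shutter and ships with a lens hood."

    # Turn the glossary into explicit rules the model must follow
    rules = "\n".join(
        f"- Always translate '{term}' as " + ", ".join(f"{lang}: {word}" for lang, word in targets.items())
        for term, targets in glossary.items()
    )

    prompt = (
        "Translate the product description into German, French and Dutch.\n"
        f"{rules}\n"
        "Following these rules, go ahead and translate the description below.\n"
        f"Description: {description}"
    )

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)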

Final thoughts

Developing best practices for prompt creation is a work in progress. LLMs change often and their ability to help you changes too. I’ve found I need to stay on top of the subject and continuously learn what works and what doesn’t.

It is also important to note that, whilst we are embracing AI to solve various challenges, we need to balance its positive benefits against its impact on our carbon footprint. MPB's sustainability strategy now incorporates our use of AI. For example, as new AI functionality is introduced into our platform, we will track the additional energy usage, offset or remove it in line with our net zero strategy, and actively remind users not to leave power-hungry AI tools running on their desktops.

I hope you found this article entertaining and informative. Do please share your own experiences of real-world AI. Meanwhile I’ll leave you with a brief list of source material I’ve found helpful:

David Anderson is an Engineering Manager at MPB, the largest global platform to buy, sell and trade used photo and video gear. https://mpb.com
