Decoding Prompts: Unveiling The Secrets Of GPT Prompt Engineering

Jan Kammerath
9 min read · Mar 24, 2024

Writing proper prompts can be a daunting task. Some people just seem to consistently get better results out of GPTs than others. Writing effective prompts, ones that produce exactly what you have in mind, requires an understanding of how GPT models work. You need a basic understanding of how ChatGPT, Claude, DALL-E, Firefly, Stable Diffusion or Llama 2 decode your prompts and what they do with them.

Illustration for education and research purposes only! Generated with the RealVisXL30 model in DiffusionBee in "dark fantasy" style, using a 582-character prompt with different weights to adjust lighting, sources, etc.

In this article, we'll dive into the secrets of prompt engineering and walk through, step by step, how the various models handle your prompts. I'll explain it in plain English and avoid going into the mathematical background of the individual steps. The intention is to give you a solid enough understanding of these models to prompt more effectively: a hands-on, practical guide to effective prompting without being too theoretical or model-specific.

How GPTs process your prompt

When referring to an AI model in this context, we are talking about a Generative Pre-trained Transformer, or GPT for short. You give it text and it returns an output, which can be new text, an image, a video or an audio stream: whatever it was trained to do.
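One detail worth internalizing before we go further: a GPT never operates on your raw text. The prompt is first converted into numeric token IDs, and everything downstream works on those numbers. Here is a toy sketch of that idea in Python; the vocabulary and the whitespace splitting are made up for illustration, and real tokenizers (byte-pair encoding, for instance) work on subword pieces rather than whole words.

```python
# Toy illustration only (NOT a real GPT tokenizer): the model never sees
# your raw prompt; the text is first mapped to numeric token IDs.
# This vocabulary is invented for demonstration purposes.
vocab = {"write": 101, "a": 7, "short": 412, "poem": 980, "<unk>": 0}

def to_token_ids(prompt: str) -> list[int]:
    """Split on whitespace and look each word up in the toy vocabulary."""
    return [vocab.get(word.lower(), vocab["<unk>"]) for word in prompt.split()]

ids = to_token_ids("Write a short poem")
print(ids)  # [101, 7, 412, 980]
```

Everything the model does afterwards, from attention to sampling the next token, happens on sequences of IDs like this, which is why two prompts that look similar to you can tokenize quite differently and produce different results.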

What tensors are and how a GPT “brain” is structured
